Motivation:
#9224 introduced overrides of ByteBufUtil#writeUtf8(...) and related
methods to operate on a sub-CharSequence directly to avoid having to
allocate substrings, but it missed an edge case where the subsequence
does not extend to the end of the CharSequence and the last char in the
subsequence is a high surrogate.
Due to the catch-IndexOutOfBoundsException optimization that avoids an
additional bounds check, it would be possible to read past the specified
end char index and successfully decode a surrogate pair which would
otherwise result in a '?' byte being written.
Modifications:
- Check for end-of-subsequence before reading the next char after a high
surrogate is encountered in the
writeUtf8(AbstractByteBuf,int,CharSequence,int,int) and
utf8BytesNonAscii methods (a sketch of the check follows below)
- Add unit test for this edge case
Result:
Bug is fixed.
This removes the bounds-check-avoidance optimization but it does not
appear to have a measurable impact on benchmark results, including when
the char sequence contains many surrogate pairs (which should be rare in
any case).
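A minimal sketch of the fixed check, with illustrative names rather than the exact ByteBufUtil internals:
```java
// Hedged sketch (not the verbatim writeUtf8 loop): before reading the low
// surrogate, check against the caller-supplied end index instead of relying
// on charAt(...) to throw.
for (int i = start; i < end; i++) {
    char c = seq.charAt(i);
    if (!Character.isHighSurrogate(c)) {
        // ... one-, two- and three-byte UTF-8 encoding paths ...
        continue;
    }
    if (i + 1 == end) {
        // The subsequence ends on a high surrogate: emit '?' instead of
        // reading charAt(end), which belongs to the enclosing CharSequence.
        buffer.writeByte('?');
        break;
    }
    char c2 = seq.charAt(++i); // safe: bounds checked above
    // ... encode the surrogate pair, or '?' if c2 is not a low surrogate ...
}
```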
Motivation:
If the HttpUtil.getCharset method is called with an illegal charset like
"charset=!illegal!" it throws an IllegalCharsetNameException. But the javadoc
states that the defaultCharset is returned for an incorrect header value. Since
the client sending the request controls the header value, this should not
crash the server.
Modification:
HttpUtil.getCharset catches the IllegalCharsetNameException and returns the
default value (see the sketch below).
Result:
HttpUtil.getCharset does not throw IllegalCharsetNameException any more.
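A hedged sketch of the fix, reusing HttpUtil's existing getCharsetAsSequence(...) helper (not the verbatim Netty source):
```java
import java.nio.charset.Charset;
import java.nio.charset.IllegalCharsetNameException;
import java.nio.charset.UnsupportedCharsetException;

public static Charset getCharset(CharSequence contentTypeValue, Charset defaultCharset) {
    CharSequence charsetRaw = getCharsetAsSequence(contentTypeValue);
    if (charsetRaw != null) {
        try {
            return Charset.forName(charsetRaw.toString());
        } catch (IllegalCharsetNameException | UnsupportedCharsetException ignored) {
            // Illegal names like "!illegal!" come straight from the client;
            // fall through to the documented default instead of throwing.
        }
    }
    return defaultCharset;
}
```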
Motivation:
On openSUSE (and probably other distributions), 64-bit builds use lib64, e.g.
/usr/lib64, and configure picks this up and builds the native library in
native-build/target/lib64 where Maven is not looking.
Modifications:
Explicitly specify --libdir=${project.build.directory}/native-build/target/lib
during configuration.
Result:
Maven uses the correct lib directory.
Motivation:
Since both projects (to some extent) rely on classifier parsing via the
os-maven-plugin, they should ideally use the same version in case the parsing
changed.
Modifications:
Upgrade os-maven-plugin from 1.6.0 to 1.6.2
Result:
Same os-maven-plugin with same parsing logic.
…ryWebSocketFrames
Motivation:
`Utf8FrameValidator` is always created and added to the pipeline in the `WebSocketServerProtocolHandler.handlerAdded` method. However, for a websocket connection that carries only `BinaryWebSocketFrame`s, the UTF-8 validator is unnecessary overhead. Adding `Utf8FrameValidator` could easily be avoided by extending `WebSocketDecoderConfig` with an additional property.
The specification requires UTF-8 validation only for `TextWebSocketFrame`.
Modification:
Added `boolean WebSocketDecoderConfig.withUTF8Validator`, which makes it possible to skip adding `Utf8FrameValidator` during pipeline initialization (see the usage sketch below).
Result:
Less overhead when using only `BinaryWebSocketFrame`s within a web socket.
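A usage sketch based on the new property (builder API as in the Netty 4.1 sources; how the config reaches the protocol handler is elided):
```java
// Build a decoder config that skips UTF-8 validation entirely: no
// Utf8FrameValidator is added when the pipeline is initialized.
WebSocketDecoderConfig config = WebSocketDecoderConfig.newBuilder()
        .withUTF8Validator(false)
        .build();
// config is then passed into the WebSocketServerProtocolHandler setup.
```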
It looks like `EmbeddedChannelPipeline` should also override `onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg)` in order not to print "Discarded message pipeline", because in the case of `EmbeddedChannelPipeline` no discarding actually happens.
This fixes the following warning in the latest Netty version with websockets and `WebSocketServerCompressionHandler`:
```
13:36:36.231 DEBUG- Decoding WebSocket Frame opCode=2
13:36:36.231 DEBUG- Decoding WebSocket Frame length=5
13:36:36.231 DEBUG- Discarded message pipeline : [JdkZlibDecoder#0, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xembedded, L:embedded - R:embedded].
```
Modification:
Override the correct method (see the sketch below)
Result:
Follow-up fix for https://github.com/netty/netty/pull/9286
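A minimal sketch of the override (simplified from the EmbeddedChannel internals):
```java
// Unhandled inbound messages in an EmbeddedChannel are queued for
// readInbound() rather than discarded, so the misleading
// "Discarded message pipeline" log is suppressed and the message is queued.
@Override
protected void onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg) {
    handleInboundMessage(msg);
}
```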
Motivation:
Our QA servers are spammed with these messages:
13:57:51.560 DEBUG- Decoding WebSocket Frame opCode=1
13:57:51.560 DEBUG- Decoding WebSocket Frame length=4
I think this is too much info for debug level. It is better to move it to trace level.
Modification:
Changed logger.debug to logger.trace in WebSocketFrameEncoder/Decoder.
Result:
Fewer messages in debug mode.
Motivation:
306299323c introduced a code change to move the responsibility of creating the stream for the upgrade to Http2FrameCodec. Unfortunately this led to the situation of having newStream().setStreamAndProperty(...) called twice. Because of this we only ever saw channelActive(...) on the Http2StreamChannel but no other events, as the mapping was replaced on the second newStream().setStreamAndProperty(...) call.
Besides this, we also did not use the correct handler for the upgrade stream in some cases.
Modifications:
- Just remove the Http2FrameCodec.onHttpClientUpgrade() method and so let the base class handle all of it. The stream is created correctly as part of the ConnectionListener implementation of Http2FrameCodec already.
- Consolidate logic of creating stream channels
- Adjust unit test to capture the bug
Result:
Fixes https://github.com/netty/netty/issues/9395
Motivation:
We missed calling reclaimSpace(...) in one case, which can prevent the Recycler from correctly reclaiming space and so cause it to create new objects when not needed.
Modifications:
Correctly call reclaimSpace(...)
Result:
Recycler correctly reclaims space in all situations.
Motivation:
When using the HTTP/2 multiplex implementation we need to ensure we correctly drain the buffered inbound data even if the RecvByteBufAllocator.Handle tells us to stop reading in between.
Modifications:
Correctly loop through the buffered inbound data until the user stops requesting it.
Result:
Fixes https://github.com/netty/netty/issues/9387.
Co-authored-by: Bryce Anderson <banderson@twitter.com>
Motivation:
We can easily reuse the Http2FrameStreamEvent instances and so reduce GC pressure, as there may be multiple events per stream over its lifetime.
Modifications:
Reuse instances
Result:
Less allocations
Motivation:
In many places Netty uses Unpooled.buffer(0) where it should use EMPTY_BUFFER. We can't change this in the constructors due to backward compatibility, but we can use Unpooled.EMPTY_BUFFER in some cases to ensure we do not allocate at all. In others we can directly use the allocator, either from the Channel / ChannelHandlerContext or from the request / response.
Modification:
- Use Unpooled.EMPTY_BUFFER where possible
- Use the allocator where possible (both illustrated below)
Result:
Fixes #9345 for the websockets and http packages
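An illustrative before/after at a hypothetical call site:
```java
// Before: allocates a fresh zero-length buffer for every message.
ctx.writeAndFlush(new DefaultHttpContent(Unpooled.buffer(0)));

// After: shared zero-capacity singleton, no allocation at all.
ctx.writeAndFlush(new DefaultHttpContent(Unpooled.EMPTY_BUFFER));

// When a real buffer is needed, prefer the context's (usually pooled) allocator.
ByteBuf buf = ctx.alloc().buffer();
```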
Motivation:
At the moment we look up the ChannelHandlerContext used in Http2StreamChannelBootstrap each time the open(...) method is invoked. This is not needed and we can just cache it for later usage.
Modifications:
Cache ChannelHandlerContext in volatile field.
Result:
Speed up the open(...) method when it is called multiple times (see the sketch below)
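A hedged sketch of the caching (field name and lookup target assumed):
```java
// Resolve the context once, then reuse it on subsequent open(...) calls.
private volatile ChannelHandlerContext ctx;

private ChannelHandlerContext findCtx() {
    ChannelHandlerContext ctx = this.ctx;
    if (ctx == null) {
        // Slow path: the pipeline lookup happens only on the first call.
        ctx = channel.pipeline().context(Http2MultiplexCodec.class);
        this.ctx = ctx;
    }
    return ctx;
}
```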
Motivation:
We need to ensure we place the encoder before the decoder when doing the websockets upgrade as the decoder may produce a close frame when protocol violations are detected.
Modifications:
- Correctly place the encoder before the decoder (ordering sketched below)
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9300
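A sketch of the required ordering (handler names and factory methods illustrative):
```java
// The encoder must already be in the pipeline when the decoder is added:
// the decoder may emit a close frame on a protocol violation as soon as it
// starts processing, and that frame has to pass through the encoder.
p.addLast("wsencoder", newWebSocketEncoder()); // outbound path first
p.addLast("wsdecoder", newWebSocketDecoder()); // then inbound validation
```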
Motivation:
If a read triggers an AbstractHttp2StreamChannel to close we can
get an NPE in the read loop.
Modifications:
Make sure that the inboundBuffer isn't null before attempting to
continue the loop (sketched below).
Result:
No NPE.
Fixes #9337
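A minimal sketch of the guard with simplified names (the real read loop also involves the RecvByteBufAllocator handle):
```java
// A read can close the channel, which nulls out inboundBuffer, so re-check
// the field on every iteration instead of assuming it stays non-null.
for (;;) {
    if (inboundBuffer == null) {
        break; // channel was closed by the previous doRead0(...) call
    }
    Object msg = inboundBuffer.poll();
    if (msg == null) {
        break; // nothing buffered
    }
    doRead0(msg);
}
```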
Motivation:
We recently made a change to use edge-triggered (ET) mode for the eventfd and not trigger a read each time. This testcase proves everything works as expected.
Modifications:
Add a testcase that verifies that the wakeups happen correctly
Result:
More tests
Motivation:
If the encoded value of a form element happens to exactly hit
the chunk limit (8096 bytes), the post request encoder will
throw a NullPointerException.
Modifications:
Catch the null case and return.
Result:
No NPE.
Motivation:
The Http2FrameCodec should be responsible for creating the upgrade stream.
Modifications:
Move the stream-creation code to Http2FrameCodec
Result:
The responsibility for creating the upgrade stream now lives in the right place
Motivation
The AbstractEpollStreamChannel::spliceTo(FileDescriptor, ...) methods
take an offset parameter but this was effectively ignored due to what
looks like a typo in the corresponding JNI function impl. Instead it
would always use the file's own native offset.
Modification
- Fix typo in netty_epoll_native_splice0() and offset accounting in
AbstractEpollStreamChannel::SpliceFdTask.
- Modify unit test to include an invocation of the public spliceTo
method using a non-zero offset.
Result
spliceTo FD methods work as expected when an offset is provided (usage sketched below).
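A hypothetical usage of the offset variant that this fixes:
```java
// Splice the next 8192 bytes from the channel into the file, starting at
// file offset 1024; before the fix the offset argument was silently ignored.
FileDescriptor fd = FileDescriptor.from(file); // io.netty.channel.unix.FileDescriptor
channel.spliceTo(fd, 1024, 8192).addListener(future -> fd.close());
```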
Motivation:
While fixing #9359, I found a few places that could be patched / improved separately.
Modification:
On handshake response generation, throw the exception before allocating response objects if the request is invalid.
Result:
No more leaks when exception is thrown.
Motivation:
Mark Http2StreamChannelBootstrap.open0(...) as deprecated as the user should not use it. It was marked as public by mistake.
Modifications:
Add deprecation warning.
Result:
User will be aware the method should not be used directly.
Motivation:
There are situations where the user may want to be more flexible about when to send PING acks, for various reasons, or may want to attach a listener to the future that is used for the ping ack. To be able to do so, we should allow the acks to be managed manually.
Modifications:
- Add a constructor to DefaultHttp2ConnectionDecoder that allows disabling the automatic sending of ping acks (the default is to send them automatically so as not to break users)
- Add methods to AbstractHttp2ConnectionHandlerBuilder (and sub-classes) to either enable or disable auto acks for pings (usage sketched below)
- Make the DefaultHttp2PingFrame constructor that allows writing acks public.
- Add unit test
Result:
More flexible way of handling acks.
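A hedged usage sketch, assuming the new builder option is exposed on Http2FrameCodecBuilder:
```java
// Disable automatic PING acks so the application decides when to reply.
Http2FrameCodec codec = Http2FrameCodecBuilder.forServer()
        .autoAckPingFrame(false)
        .build();

// Later, when a (non-ack) ping arrives, ack it manually and attach a listener.
ctx.writeAndFlush(new DefaultHttp2PingFrame(ping.content(), true /* ack */))
   .addListener(future -> { /* e.g. record when the ack was flushed */ });
```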
Motivation
I noticed this while looking at something else.
AbstractEpollStreamChannel::spliceQueue is an MPSC queue but is only
accessed from the event loop, so it could just be changed to e.g. an
ArrayDeque. This PR instead reverts to using it as an MPSC queue to
avoid dispatching a task to the EL, as appears to have been the original
intention.
Modification
Change AbstractEpollStreamChannel::spliceQueue to be volatile and lazily
initialized via double-checked locking (sketched below). Add tasks directly to the queue
from the public methods rather than possibly waking the EL just to
enqueue.
An alternative is just to change PlatformDependent.newMpscQueue() to new
ArrayDeque() and be done with it :)
Result
Less disruptive channel/fd-splicing.
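A minimal double-checked-locking sketch of the lazy init described above (field and task-type names as in AbstractEpollStreamChannel, simplified):
```java
private volatile Queue<SpliceInTask> spliceQueue; // null until the first splice

private Queue<SpliceInTask> spliceQueue() {
    Queue<SpliceInTask> queue = spliceQueue;
    if (queue == null) {
        synchronized (this) {
            queue = spliceQueue;
            if (queue == null) {
                // MPSC: any thread may offer, only the event loop polls.
                queue = PlatformDependent.newMpscQueue();
                spliceQueue = queue;
            }
        }
    }
    return queue;
}
```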
Motivation:
Based on https://tools.ietf.org/html/rfc6455#section-1.3, for non-browser
clients the Origin header field may be sent if it makes sense in the context of those clients.
Modification:
Replace Sec-WebSocket-Origin with Origin
Result:
Fixes #9134.
Motivation:
Recently I have been building an MQTT broker and client based on Netty. I found that Netty provides an MQTT encoder and decoder, but no basic examples. So I'm sharing my simple heartBeat MQTT broker and client as an example.
Modification:
New MQTT heartBeat example under io.netty.example/mqtt/heartBeat/.
Result:
The client sends CONNECT and PINGREQ (heartBeat messages):
- CONNECT: once the channel is active
- PINGREQ: once an IdleStateEvent is triggered, which happens every 20 seconds in this example (sketched below)
The client discards all messages it receives.
The MQTT broker can handle CONNECT, PINGREQ and DISCONNECT messages:
- CONNECT: send CONNACK back
- PINGREQ: send PINGRESP back
- DISCONNECT: close the channel
The broker closes the channel if 2 heartBeats are lost, which amounts to 45 seconds in this example.
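A minimal sketch of the client-side heartbeat trigger described above (handler wiring simplified from the example):
```java
// A 20s writer-idle period fires an IdleStateEvent, answered with PINGREQ.
pipeline.addLast(new IdleStateHandler(0, 20, 0));
pipeline.addLast(new ChannelInboundHandlerAdapter() {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            MqttFixedHeader pingHeader = new MqttFixedHeader(
                    MqttMessageType.PINGREQ, false, MqttQoS.AT_MOST_ONCE, false, 0);
            ctx.writeAndFlush(new MqttMessage(pingHeader));
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
});
```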
Motivation
Debugging SSL/TLS connections through Wireshark is a pain -- if the cipher used involves Diffie-Hellman then it is essentially impossible unless you can have the client dump out the master key [1].
This is a work-in-progress change (tests & comments to come!) that introduces a new handler you can set on the SslContext to receive the master key & session id. I'm hoping to get feedback on whether a change in this vein would be welcomed.
An implementation that conforms to Wireshark's NSS key log [2] file format is also included.
Depending on feedback on the PR going forward, I am planning to "clean it up" by adding documentation, an example server & tests. The implementation will also need to be finished for retrieving the master key from the OpenSSL context.
[1] https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
[2] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format
Modification
- Added SslMasterKeyHandler
- An implementation of the handler that conforms to Wireshark's key log format is included.
Result:
Be able to debug SSL / TLS connections more easily.
Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
Motivation:
The Netty homepage (netty.io) serves both "http" and "https".
It's recommended to use https rather than http.
Modification:
Changed "http://netty.io" links to "https://netty.io".
Result:
No functional change.
Motivation:
Http2ConnectionHandler (and sub-classes) allow configuring a graceful shutdown timeout but only apply it if there is at least one active stream. We should always apply the timeout. This is also true when we try to send a GO_AWAY and close the connection because of a connection error.
Modifications:
- Always apply the timeout if one is configured
- Add unit test
Result:
Always respect gracefulShutdownTimeoutMillis
Motivation:
b3dba317d7 introduced the concept of Http2SettingsReceivedConsumer but did not correctly implement DecoratingHttp2ConnectionEncoder.consumeRemoteSettings(...).
Modifications:
- Add missing `else` around the throws
- Add unit tests
Result:
Correctly implement DecoratingHttp2ConnectionEncoder.consumeRemoteSettings(...)
Motivation
The nice change made by @carl-mastrangelo in #9307 for lookup-table
based HPACK Huffman decoding can be simplified a little to remove the
separate flags field and eliminate some intermediate operations.
Modification
Simplify HpackHuffmanDecoder::decode logic including de-dup of the
per-nibble part.
Result
Less code, possibly better performance though not noticeable in a quick
benchmark.
Motivation:
We don't need the extra ChannelPromise when writing headers anymore in Http2FrameCodec. This also means we can re-use a ChannelFutureListener and so do not need to create new instances all the time.
Modifications:
- Just pass the original ChannelPromise when writing headers
- Reuse the ChannelFutureListener
Result:
Two fewer objects created when writing headers for a not-yet-created stream (pattern sketched below).
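A sketch of the pattern (names illustrative):
```java
// A stateless listener can be a shared constant instead of a fresh
// allocation on every header write.
private static final ChannelFutureListener HEADERS_WRITTEN = future -> {
    if (!future.isSuccess()) {
        // ... handle the failed header write ...
    }
};

// The caller's promise is passed straight through; only the shared
// listener is attached, so nothing extra is allocated per write.
encoder.writeHeaders(ctx, streamId, headers, 0, endStream, promise)
       .addListener(HEADERS_WRITTEN);
```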
Motivation:
The previously used maxHeaderListSize was too low, which resulted in exceptions during the benchmark run:
```
io.netty.handler.codec.http2.Http2Exception: Header size exceeded max allowed size (8192)
at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
at io.netty.handler.codec.http2.Http2Exception.headerListSizeError(Http2Exception.java:188)
at io.netty.handler.codec.http2.Http2CodecUtil.headerListSizeExceeded(Http2CodecUtil.java:231)
at io.netty.handler.codec.http2.HpackDecoder$Http2HeadersSink.finish(HpackDecoder.java:545)
at io.netty.handler.codec.http2.HpackDecoder.decode(HpackDecoder.java:132)
at io.netty.handler.codec.http2.HpackDecoderBenchmark.decode(HpackDecoderBenchmark.java:85)
at io.netty.handler.codec.http2.generated.HpackDecoderBenchmark_decode_jmhTest.decode_thrpt_jmhStub(HpackDecoderBenchmark_decode_jmhTest.java:120)
at io.netty.handler.codec.http2.generated.HpackDecoderBenchmark_decode_jmhTest.decode_Throughput(HpackDecoderBenchmark_decode_jmhTest.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
```
Also we should ensure we only use ASCII for header names.
Modifications:
Just use Integer.MAX_VALUE as the limit
Result:
Be able to run the benchmark without exceptions
Motivation:
There is some unnecessary code (like toString() calls) which can be cleaned up.
Modifications:
- Remove not needed toString() calls
- Simplify subString(...) calls
- Remove some explicit casts when not needed.
Result:
Cleaner code
Motivation:
ff0045e3e1 changed HpackHuffmanDecoder to use a lookup-table, which greatly improved performance. We can squeeze out another 3% win by using a ByteProcessor, which reduces the number of bounds checks / reference-count checks needed by processing byte-by-byte.
Modifications:
Implement the logic with a ByteProcessor (sketched after the benchmark numbers below)
Result:
Another ~3% perf improvement which shows up when using h2load to simulate load.
`h2load -c 100 -m 100 --duration 60 --warm-up-time 10 http://127.0.0.1:8080`
Before:
```
finished in 70.02s, 620051.67 req/s, 20.70MB/s
requests: 37203100 total, 37203100 started, 37203100 done, 37203100 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 37203100 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.21GB (1302108500) total, 41.84MB (43872600) headers (space savings 90.00%), 460.24MB (482598600) data
min max mean sd +/- sd
time for request: 404us 24.52ms 15.93ms 1.45ms 87.90%
time for connect: 0us 0us 0us 0us 0.00%
time to 1st byte: 0us 0us 0us 0us 0.00%
req/s : 6186.64 6211.60 6199.00 5.18 65.00%
```
With this change:
```
finished in 70.02s, 642103.33 req/s, 21.43MB/s
requests: 38526200 total, 38526200 started, 38526200 done, 38526200 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 38526200 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.26GB (1348417000) total, 42.39MB (44444900) headers (space savings 90.00%), 466.25MB (488893900) data
min max mean sd +/- sd
time for request: 370us 24.89ms 15.52ms 1.35ms 88.02%
time for connect: 0us 0us 0us 0us 0.00%
time to 1st byte: 0us 0us 0us 0us 0.00%
req/s : 6407.06 6435.19 6419.74 5.62 67.00%
```
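The shape of the change, as a hedged sketch rather than the exact HpackHuffmanDecoder code:
```java
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

// Implementing ByteProcessor lets ByteBuf.forEachByte(...) perform a single
// up-front bounds / reference-count check instead of one per readByte() call.
final class HuffmanProcessor implements ByteProcessor {
    @Override
    public boolean process(byte input) {
        // ... per-nibble lookup-table decoding of `input` ...
        return true; // returning false would abort the iteration early
    }
}
// Usage: buf.forEachByte(buf.readerIndex(), length, new HuffmanProcessor());
```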
Motivation:
In the latest release we introduced Http2MultiplexHandler as a replacement for Http2MultiplexCodec. This split the frame parsing from the multiplexing to allow a more flexible way to handle frames and to make the code cleaner. Unfortunately we missed handling this specially in Http2ServerUpgradeCodec and so did not correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...). This led to the situation that we did not correctly receive the event on the Http2MultiplexHandler and so did not correctly create the Http2StreamChannel for the upgrade stream. Because of this we ended up with an NPE if a frame was dispatched to the upgrade stream later on.
Modifications:
- Correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...)
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9314.
Motivation:
2c99fc0f12 introduced a change that eagerly recycles the queue. Unfortunately it did not correctly protect against re-entrancy, which can cause an NPE.
Modifications:
- Correctly protect against re-entrancy by adding null checks
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9319.
Motivation:
When decoding a DnsRecord that contains compression pointers, it can happen that only some of the pointers are decompressed while others are not. When such a record is then re-encoded, the remaining compression pointers point to the wrong location, resulting in a bad label problem.
Modification:
Pre-decompress record RData that may contain compression pointers.
Result:
Fixes #8962
* Correctly take length of ByteBufInputStream into account for readLine() / readByte()
Motivation:
ByteBufInputStream did not correctly take the length into account when validating bounds for readLine() / readByte(), which could lead to reading more than allowed.
Modifications:
- Correctly take length into account
- Add unit tests
- Fix existing unit test
Result:
Correctly take length of ByteBufInputStream into account.
Related to https://github.com/netty/netty/pull/9306.
On servers with many pipelines or dynamic pipelines, it is easy for an end user to make a mistake during pipeline configuration. The current message:
`Discarded inbound message PooledUnsafeDirectByteBuf(ridx: 0, widx: 2, cap: 2) that reached at the tail of the pipeline. Please check your pipeline configuration.`
is not always meaningful and doesn't make it possible to find the misconfigured pipeline quickly.
Modification:
Added an additional log placeholder that identifies the pipeline's handlers and channel info. This allows end users to quickly find the problematic pipeline.
Result:
Meaningful warning when the message reaches the end of the pipeline. Fixes #7285
Motivation:
buffer.isReadable() should not be used to limit the amount of data that can be read, as the amount may be less than what is readable.
Modification:
- Use available(), which takes the length into account (sketched below)
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9305
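A sketch of the corrected bound, wrapped in a hypothetical helper:
```java
import java.io.IOException;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufInputStream;

// available() honours the length passed to the constructor, while
// buf.isReadable() looks at the whole underlying buffer and may allow
// reading past the requested limit.
static void drain(ByteBuf buf, int length) throws IOException {
    ByteBufInputStream in = new ByteBufInputStream(buf, length);
    while (in.available() > 0) {      // correct: bounded by `length`
        in.readByte();
    }
    // while (buf.isReadable()) { ... }  // wrong: ignores the `length` limit
}
```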
Motivation:
The AbstractSniHandler previously was willing to tolerate up to three
non-handshake records before a ClientHello that contained an SNI
extension field. This is, so far as I can tell, completely
unnecessary: no TLS implementation will be sending alerts or change
cipher spec messages before ClientHello.
Given that it was not possible to determine why this loop is in
the code to begin with, it's probably just best to remove it.
Modifications:
Remove the for loop.
Result:
The AbstractSniHandler will more rapidly determine whether it should
pass the records on to the default SSL handler.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>