Commit Graph

9769 Commits

Nico Kruber
17f10d6a92 Update os-maven-plugin version to match netty-tcnative (#9409)
Motivation:

Since both projects rely (to some extent) on classifier parsing via the
os-maven-plugin, they should ideally use the same version in case the parsing
has changed.

Modifications:

Upgrade os-maven-plugin from 1.6.0 to 1.6.2

Result:

The same os-maven-plugin version, and thus the same parsing logic.
2019-08-03 12:38:35 +02:00
Dmitriy Dumanskiy
7c5a3c5898 Allow to turn off Utf8FrameValidator creation for websocket with BinaryWebSocketFrames (#9417)

Motivation:

`Utf8FrameValidator` is always created and added to the pipeline in the `WebSocketServerProtocolHandler.handlerAdded` method. However, for websocket connections that only carry `BinaryWebSocketFrame`s, the UTF-8 validator is unnecessary overhead. Adding the `Utf8FrameValidator` can easily be avoided by extending `WebSocketDecoderConfig` with an additional property.

The specification requires UTF-8 validation only for `TextWebSocketFrame`.

Modification:

Added `boolean WebSocketDecoderConfig.withUTF8Validator`, which allows skipping the `Utf8FrameValidator` during pipeline initialization.

Result:

Less overhead when using only `BinaryWebSocketFrame`s within a websocket.
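
A minimal sketch of using the new flag, assuming the `WebSocketDecoderConfig` builder exposes it as described above (the surrounding class and the other builder calls are illustrative):

```
import io.netty.handler.codec.http.websocketx.WebSocketDecoderConfig;

public final class BinaryOnlyDecoderConfig {

    // Skip the Utf8FrameValidator for connections that only ever carry
    // BinaryWebSocketFrames; builder method names assumed from this change.
    public static WebSocketDecoderConfig binaryOnly() {
        return WebSocketDecoderConfig.newBuilder()
                .maxFramePayloadLength(65536)
                .withUTF8Validator(false) // no TextWebSocketFrames expected
                .build();
    }
}
```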
2019-08-03 12:35:47 +02:00
Dmitriy Dumanskiy
005fd55b88 #7285 Improved "Discarded inbound message" warning for embedded channel (#9414)
Motivation:

It looks like `EmbeddedChannelPipeline` should also override `onUnhandledInboundMessage(ChannelHandlerContext ctx, Object msg)` so that it does not print the "Discarded message pipeline" warning, because in the case of `EmbeddedChannelPipeline` no discarding actually happens.

This fixes the following warning in the latest Netty version with websockets and `WebSocketServerCompressionHandler`:

```
13:36:36.231 DEBUG- Decoding WebSocket Frame opCode=2
13:36:36.231 DEBUG- Decoding WebSocket Frame length=5
13:36:36.231 DEBUG- Discarded message pipeline : [JdkZlibDecoder#0, DefaultChannelPipeline$TailContext#0]. Channel : [id: 0xembedded, L:embedded - R:embedded].
```

Modification:

Override the correct method.

Result:
Follow-up fix after https://github.com/netty/netty/pull/9286.
2019-08-03 12:20:41 +02:00
Dmitriy Dumanskiy
1194577bd0 Decrease the level of logging in WebSocketFrameEncoder/Decoder (#9415)
Motivation:

Our QA servers are spammed with these messages:

13:57:51.560 DEBUG- Decoding WebSocket Frame opCode=1
13:57:51.560 DEBUG- Decoding WebSocket Frame length=4
I think this is too much info for the debug level; it is better to move it to the trace level.

Modification:

Changed logger.debug to logger.trace in WebSocketFrameEncoder/Decoder.

Result:

Fewer messages in debug mode.
2019-08-03 12:14:40 +02:00
root
718b7626e6 [maven-release-plugin] prepare for next development iteration 2019-07-24 09:05:57 +00:00
root
465c900c04 [maven-release-plugin] prepare release netty-4.1.38.Final 2019-07-24 09:05:23 +00:00
Per Lundberg
aa032b8aea Future.java: Fix typos in Javadoc (#9391)
Motivation:

Docs should have no typos

Modifications:

Fix a few typos

Result:

More correct docs.
2019-07-24 07:23:29 +02:00
Norman Maurer
513e9f2893
HTTP/2: Ensure newStream() is called only once per connection upgrade and the correct handler is used (#9396)
Motivation:

306299323c introduced a code change to move the responsibility of creating the stream for the upgrade to Http2FrameCodec. Unfortunately this led to the situation of having newStream().setStreamAndProperty(...) be called twice. Because of this we only ever saw the channelActive(...) on the Http2StreamChannel but no other events, as the mapping was replaced on the second newStream().setStreamAndProperty(...) call.

Besides this, we also did not use the correct handler for the upgrade stream in some cases.

Modifications:

- Just remove the Http2FrameCodec.onHttpClientUpgrade() method and so let the base class handle all of it. The stream is created correctly as part of the ConnectionListener implementation of Http2FrameCodec already.
- Consolidate logic of creating stream channels
- Adjust unit test to capture the bug

Result:

Fixes https://github.com/netty/netty/issues/9395
2019-07-23 21:05:39 +02:00
YuanHu
94f3930850 Recycler availableSharedCapacity will be slowly exhausted due to a missing reclaimSpace(...) call (#9394)
Motivation:

We missed calling reclaimSpace(...) in one case, which can lead to the Recycler not correctly reclaiming space and so creating new objects when not needed.

Modifications:

Correctly call reclaimSpace(...)

Result:

Recycler correctly reclaims space in all situations.
2019-07-21 21:06:31 +02:00
Norman Maurer
60cf18cf20
HTTP/2 multiplex: Correctly process buffered inbound data even if autoRead is false (#9389)
Motivation:

When using the HTTP/2 multiplex implementation we need to ensure we correctly drain the buffered inbound data even if the RecvByteBufAllocator.Handle tells us to stop reading in between.

Modifications:

Correctly loop through the buffered inbound data until the user stops requesting from it.

Result:

Fixes https://github.com/netty/netty/issues/9387.

Co-authored-by: Bryce Anderson <banderson@twitter.com>
2019-07-21 20:58:23 +02:00
Norman Maurer
04afa3a07e
Reuse Http2FrameStreamEvent instances to reduce GC pressure (#9392)
Motivation:

We can easily reuse the Http2FrameStreamEvent instances and so reduce GC pressure, as there may be multiple events per stream over its lifetime.

Modifications:

Reuse instances

Result:

Fewer allocations
2019-07-21 20:35:35 +02:00
Norman Maurer
924150198e
Update java versions (#9393)
Motivation:

There were new openjdk releases

Modifications:

Update releases to latest

Result:

Use latest openjdk versions on CI
2019-07-21 20:34:26 +02:00
Norman Maurer
1e8c0c59f1
Use allocator when constructing ByteBufHolder sub-types or use Unpool… (#9377)
Motivation:

In many places Netty uses Unpooled.buffer(0) where it should use EMPTY_BUFFER. We can't change this in the constructors due to backward compatibility, but we can use Unpooled.EMPTY_BUFFER in some cases to ensure we do not allocate at all. In others we can directly use the allocator, either from the Channel / ChannelHandlerContext or the request / response.

Modification:

- Use Unpooled.EMPTY_BUFFER where possible
- Use allocator where possible

Result:

Fixes #9345 for the websocket and http packages
2019-07-18 10:29:50 +02:00
Norman Maurer
84cf8f14e9
Cache the ChannelHandlerContext used in Http2StreamChannelBootstrap (#9382)
Motivation:

At the moment we look up the ChannelHandlerContext used in Http2StreamChannelBootstrap each time the open(...) method is invoked. This is not needed and we can just cache it for later usage.

Modifications:

Cache ChannelHandlerContext in volatile field.

Result:

Speed up the open(...) method when it is called multiple times
2019-07-18 10:20:34 +02:00
Norman Maurer
26c3abc63c
Add websocket encoder / decoder in correct order to the pipeline when HttpServerCodec is used (#9386)
Motivation:

We need to ensure we place the encoder before the decoder when doing the websocket upgrade, as the decoder may produce a close frame when protocol violations are detected.

Modifications:

- Correctly place encoder before decoder
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9300
2019-07-18 10:19:09 +02:00
Bryce Anderson
dd1785ba66 Fix an NPE in AbstractHttp2StreamChannel (#9379)
Motivation:

If a read triggers an AbstractHttp2StreamChannel to close, we can
get an NPE in the read loop.

Modifications:

Make sure that the inboundBuffer isn't null before attempting to
continue the loop.

Result:

No NPE.
Fixes #9337
2019-07-17 20:12:19 +02:00
Norman Maurer
e8ab79f34d
Add testcase to prove that ET semantics for eventFD are correct (#9385)
Motivation:

We recently made a change to use ET for the eventfd and not trigger a read each time. This testcase proves everything works as expected.

Modifications:

Add a testcase that verifies that the wakeups happen correctly

Result:

More tests
2019-07-17 12:23:08 +02:00
Norman Maurer
4a3dd23f2f
Update to adopt@1.8.212-04 (#9384)
Motivation:

We should use the latest JDK 1.8 release

Modifications:

Update to adopt@1.8.212-04

Result:

Use latest jdk 1.8 on ci
2019-07-17 09:41:45 +02:00
Norman Maurer
e8d27560d0
Use latest OpenJDK13 EA release (#9378)
Motivation:

A new EA release for OpenJDK13 was released

Modifications:

Update EA version

Result:

Use latest OpenJDK 13 EA on ci
2019-07-17 09:29:43 +02:00
Dmitriy Dumanskiy
a82d62ae67 prefer instanceOf instead of getClass() (#9366)
Motivation:

`instanceof` doesn't perform a null check like `getClass()` does, so `instanceof` may be faster. However, this is not true in all cases, as C2 can eliminate these null checks for `getClass()`.

Modification:

Replaced `string.getClass() == AsciiString.class` with `string instanceof AsciiString`.

Proof:

```
@BenchmarkMode(Mode.Throughput)
@Fork(value = 1)
@State(Scope.Thread)
@Warmup(iterations = 5, time = 1, batchSize = 1000)
@Measurement(iterations = 10, time = 1, batchSize = 1000)
public class GetClassInstanceOf {

    Object key;

    @Setup
    public void setup() {
        key = "123";
    }

    @Benchmark
    public boolean getClassEquals() {
        return key.getClass() == String.class;
    }

    @Benchmark
    public boolean instanceOf() {
        return key instanceof String;
    }

}
```

```
Benchmark                           Mode  Cnt       Score      Error  Units
GetClassInstanceOf.getClassEquals  thrpt   10  401863.130 ± 3092.568  ops/s
GetClassInstanceOf.instanceOf      thrpt   10  421386.176 ± 4317.328  ops/s
```
2019-07-16 21:20:12 +02:00
Emily Littleworth
d07d7e2b9a Return null in HttpPostRequestEncoder (#9352)
Motivation:

If the encoded value of a form element happens to exactly hit
the chunk limit (8096 bytes), the post request encoder will
throw a NullPointerException.

Modifications:

Catch the null case and return.

Result:

No NPE.
2019-07-16 13:29:33 +02:00
Norman Maurer
306299323c
Move responsibility for creating upgrade stream to Http2FrameCodec (#9360)
Motivation:

The Http2FrameCodec should be responsible for creating the upgrade stream.

Modifications:

Move code to create stream to Http2FrameCodec

Result:

The responsibility now lives in the correct place
2019-07-16 13:24:45 +02:00
Nick Hill
1748352d98 Fix epoll spliceTo file descriptor with offset (#9369)
Motivation

The AbstractEpollStreamChannel::spliceTo(FileDescriptor, ...) methods
take an offset parameter but this was effectively ignored due to what
looks like a typo in the corresponding JNI function impl. Instead it
would always use the file's own native offset.

Modification

- Fix typo in netty_epoll_native_splice0() and offset accounting in
AbstractEpollStreamChannel::SpliceFdTask.
- Modify unit test to include an invocation of the public spliceTo
method using non-zero offset.

Result

spliceTo FD methods work as expected when an offset is provided.
2019-07-16 13:22:30 +02:00
Dmitriy Dumanskiy
cd824e4e31 Cleanup in websockets, throw exception before allocating response if possible (#9361)
Motivation:

While fixing #9359, I found a few places that could be patched / improved separately.

Modification:

On handshake response generation, throw the exception before allocating response objects if the request is invalid.

Result:

No more leaks when an exception is thrown.
2019-07-16 13:12:17 +02:00
Norman Maurer
4f172c13bb
Add deprecation to Http2StreamChannelBootstrap.open0(...) as it was marked as public by mistake (#9372)
Motivation:

Mark Http2StreamChannelBootstrap.open0(...) as deprecated as the user should not use it. It was marked as public by mistake.

Modifications:

Add deprecation warning.

Result:

User will be aware the method should not be used directly.
2019-07-16 13:08:09 +02:00
Norman Maurer
906fc02b3f
Allow to disable automatically sending PING acks. (#9338)
Motivation:

There are situations where the user may want to be more flexible about when to send the PING acks, or be able to attach a listener to the future that is used for the ping ack. To be able to do so we should allow managing the acks manually.

Modifications:

- Add a constructor to DefaultHttp2ConnectionDecoder that allows disabling the automatic sending of ping acks (the default is to send them automatically so as not to break users)
- Add methods to AbstractHttp2ConnectionHandlerBuilder (and sub-classes) to either enable or disable auto acks for pings
- Make the DefaultHttp2PingFrame constructor that allows writing acks public.
- Add unit test

Result:

More flexible way of handling acks.
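
A minimal sketch of manual ack handling under this change, assuming auto acks were disabled via the builder; the handler class is illustrative, only the now-public `DefaultHttp2PingFrame` constructor comes from this commit:

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http2.DefaultHttp2PingFrame;
import io.netty.handler.codec.http2.Http2PingFrame;

// With automatic PING acks disabled, the application acks PINGs itself and
// can attach a listener to the write future of the ack.
public final class ManualPingAckHandler extends SimpleChannelInboundHandler<Http2PingFrame> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Http2PingFrame frame) {
        if (!frame.ack()) {
            ctx.writeAndFlush(new DefaultHttp2PingFrame(frame.content(), true))
               .addListener(future -> {
                   // react to the ack having been written (or having failed)
               });
        }
    }
}
```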
2019-07-12 18:15:06 +02:00
Nick Hill
5384bbcf85 Epoll: Don't wake event loop when splicing (#9354)
Motivation

I noticed this while looking at something else.
AbstractEpollStreamChannel::spliceQueue is an MPSC queue but only
accessed from the event loop, so it could just be changed to e.g. an
ArrayDeque. This PR instead reverts to using it as an MPSC queue to
avoid dispatching a task to the EL, as appears to have been the original
intention.

Modification

Change AbstractEpollStreamChannel::spliceQueue to be volatile and lazily
initialized via double-checked locking. Add tasks directly to the queue
from the public methods rather than possibly waking the EL just to
enqueue.

An alternative is just to change PlatformDependent.newMpscQueue() to new
ArrayDeque() and be done with it :)

Result

Less disruptive channel/fd-splicing.
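
A minimal sketch of the double-checked lazy initialization described above; the queue type and class names are illustrative stand-ins for the actual `PlatformDependent.newMpscQueue()`-backed field:

```
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public final class LazySpliceQueue {

    // Lazily initialized via double-checked locking.
    private volatile Queue<Runnable> spliceQueue;

    private Queue<Runnable> spliceQueue() {
        Queue<Runnable> queue = spliceQueue;
        if (queue == null) {
            synchronized (this) {
                queue = spliceQueue;
                if (queue == null) {
                    queue = new ConcurrentLinkedQueue<Runnable>();
                    spliceQueue = queue;
                }
            }
        }
        return queue;
    }

    // Called from any thread: enqueue directly instead of waking the event loop.
    public void addTask(Runnable task) {
        spliceQueue().offer(task);
    }
}
```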
2019-07-12 18:06:26 +02:00
Andrey Mizurov
be26f4e00f Fixed incorrect Sec-WebSocket-Origin header for v13, see #9134 (#9312)
Motivation:

Based on https://tools.ietf.org/html/rfc6455#section-1.3 - for non-browser
clients, the Origin header field may be sent if it makes sense in the context of those clients.

Modification:

Replace Sec-WebSocket-Origin with Origin

Result:

Fixes #9134.
2019-07-12 12:05:39 +02:00
Robin Gong
b02ee1106f feat(example-mqtt): new MQTT heartBeat broker and client examples (#9336)
Motivation:

Recently I was building an MQTT broker and client based on Netty. I found the MQTT encoder and decoder, but no basic examples, so I'm sharing my simple heartBeat MQTT broker and client as an example.

Modification:

New MQTT heartBeat example under io.netty.example/mqtt/heartBeat/.

Result:

The client sends CONNECT and PINGREQ (heartBeat message):
  - CONNECT: once the channel is active
  - PINGREQ: once an IdleStateEvent is triggered, after 20 seconds in this example (see the sketch below)
The client discards all messages it receives.
The MQTT broker can handle CONNECT, PINGREQ and DISCONNECT messages:
  - CONNECT: send CONNACK back
  - PINGREQ: send PINGRESP back
  - DISCONNECT: close the channel
The broker closes the channel if 2 heartBeats are lost, which is set to 45 seconds in this example.
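
The PINGREQ-on-idle behaviour can be wired roughly like this; a sketch assuming the usual `IdleStateHandler` setup and the mqtt-codec message types, not the exact example code:

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.mqtt.MqttFixedHeader;
import io.netty.handler.codec.mqtt.MqttMessage;
import io.netty.handler.codec.mqtt.MqttMessageType;
import io.netty.handler.codec.mqtt.MqttQoS;
import io.netty.handler.timeout.IdleStateEvent;

// Client-side keep-alive: an IdleStateHandler placed earlier in the pipeline
// (e.g. new IdleStateHandler(0, 20, 0)) fires IdleStateEvents after 20 seconds
// of write inactivity, and this handler reacts by sending a PINGREQ.
public final class HeartBeatClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) {
        if (evt instanceof IdleStateEvent) {
            ctx.writeAndFlush(new MqttMessage(new MqttFixedHeader(
                    MqttMessageType.PINGREQ, false, MqttQoS.AT_MOST_ONCE, false, 0)));
        } else {
            ctx.fireUserEventTriggered(evt);
        }
    }
}
```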
2019-07-10 12:19:15 +02:00
Farid Zakaria
7fc355aa05 Introduce SslMasterKeyHandler (#8653)
Motivation

Debugging SSL/TLS connections through Wireshark is a pain -- if the cipher used involves Diffie-Hellman then it is essentially impossible unless you can have the client dump out the master key [1].
This is a work-in-progress change (tests & comments to come!) that introduces a new handler you can set on the SslContext to receive the master key & session id. I'm hoping to get feedback if a change in this vein would be welcomed.

An implementation that conforms to Wireshark's NSS key log[2] file is also included.

Depending on feedback on the PR going forward I am planning to "clean it up" by adding documentation, example server & tests. Implementation will need to be finished as well for retrieving the master key from the OpenSSL context.

[1] https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
[2] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format

Modification

- Added SslMasterKeyHandler
- An implementation of the handler that conforms to Wireshark's key log format is included.

Result:

Be able to debug SSL / TLS connections more easily.
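
For reference, the NSS key log format mentioned in [2] is a plain text file of `CLIENT_RANDOM <hex client random> <hex master secret>` lines; an illustrative formatter (not part of the handler's API):

```
public final class NssKeyLogLine {

    // Produces one Wireshark/NSS key-log line for a TLS session.
    public static String format(byte[] clientRandom, byte[] masterSecret) {
        return "CLIENT_RANDOM " + hex(clientRandom) + ' ' + hex(masterSecret);
    }

    private static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(Character.forDigit((b >> 4) & 0xF, 16))
              .append(Character.forDigit(b & 0xF, 16));
        }
        return sb.toString();
    }
}
```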

Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
2019-07-10 12:02:46 +02:00
jingene
c0f9364870 Change the netty.io homepage scheme(http -> https) (#9344)
Motivation:

The Netty homepage (netty.io) serves both "http" and "https".
It's recommended to use https rather than http.

Modification:

Changed "http://netty.io" to "https://netty.io".

Result:

No functional change; the links now point to https://netty.io.
2019-07-09 21:09:42 +02:00
Norman Maurer
bded2a1c75
HTTP2: Always apply the graceful shutdown timeout if configured (#9340)
Motivation:

Http2ConnectionHandler (and sub-classes) allow configuring a graceful shutdown timeout but only apply it if there is at least one active stream. We should always apply the timeout. This is also true when we try to send a GO_AWAY and close the connection because of a connection error.

Modifications:

- Always apply the timeout if one is configured
- Add unit test

Result:

Always respect gracefulShutdownTimeoutMillis
2019-07-09 21:05:34 +02:00
Norman Maurer
4312e11316
DecoratingHttp2ConnectionEncoder.consumeRemoteSettings must not throw if delegate is instance of Http2SettingsReceivedConsumer (#9343)
Motivation:

b3dba317d7 introduced the concept of Http2SettingsReceivedConsumer but did not correctly implement DecoratingHttp2ConnectionEncoder.consumeRemoteSettings(...).

Modifications:

- Add the missing `else` around the throw
- Add unit tests

Result:

Correctly implement DecoratingHttp2ConnectionEncoder.consumeRemoteSettings(...)
2019-07-09 14:39:32 +02:00
Nick Hill
91d6e0ea8f Simplify HpackHuffmanDecoder table decode logic (#9335)
Motivation

The nice change made by @carl-mastrangelo in #9307 for lookup-table
based HPACK Huffman decoding can be simplified a little to remove the
separate flags field and eliminate some intermediate operations.

Modification

Simplify HpackHuffmanDecoder::decode logic including de-dup of the
per-nibble part.

Result

Less code, possibly better performance though not noticeable in a quick
benchmark.
2019-07-08 12:04:20 +02:00
Norman Maurer
db8dd66f09
Reduce object creation on Http2FrameCodec (#9333)
Motivation:

We don't need the extra ChannelPromise when writing headers anymore in Http2FrameCodec. This also means we can re-use a ChannelFutureListener and so do not need to create new instances all the time.

Modifications:

- Just pass the original ChannelPromise when writing headers
- Reuse the ChannelFutureListener

Result:

Two fewer objects created when writing headers for a not-yet-created stream.
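
The listener reuse boils down to keeping a single shared instance instead of allocating one per write; an illustrative sketch, not the actual listener in Http2FrameCodec:

```
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

public final class SharedListeners {

    // One shared instance attached to many writes instead of a new object per write.
    public static final ChannelFutureListener FIRE_EXCEPTION_ON_FAILURE = new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) {
            if (!future.isSuccess()) {
                future.channel().pipeline().fireExceptionCaught(future.cause());
            }
        }
    };

    private SharedListeners() { }
}
```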
2019-07-06 09:08:20 +02:00
Norman Maurer
6da809dc11
Increase maxHeaderListSize for HpackDecoderBenchmark to be able to be… (#9321)
Motivation:

The previously used maxHeaderListSize was too low, which resulted in exceptions during the benchmark run:

```
io.netty.handler.codec.http2.Http2Exception: Header size exceeded max allowed size (8192)
	at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
	at io.netty.handler.codec.http2.Http2Exception.headerListSizeError(Http2Exception.java:188)
	at io.netty.handler.codec.http2.Http2CodecUtil.headerListSizeExceeded(Http2CodecUtil.java:231)
	at io.netty.handler.codec.http2.HpackDecoder$Http2HeadersSink.finish(HpackDecoder.java:545)
	at io.netty.handler.codec.http2.HpackDecoder.decode(HpackDecoder.java:132)
	at io.netty.handler.codec.http2.HpackDecoderBenchmark.decode(HpackDecoderBenchmark.java:85)
	at io.netty.handler.codec.http2.generated.HpackDecoderBenchmark_decode_jmhTest.decode_thrpt_jmhStub(HpackDecoderBenchmark_decode_jmhTest.java:120)
	at io.netty.handler.codec.http2.generated.HpackDecoderBenchmark_decode_jmhTest.decode_Throughput(HpackDecoderBenchmark_decode_jmhTest.java:83)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

```

Also we should ensure we only use ASCII for header names.

Modifications:

Just use Integer.MAX_VALUE as the limit

Result:

Be able to run the benchmark without exceptions
2019-07-04 11:24:13 +02:00
jimin
a0656d2a31 Remove unnecessary code (#9303)
Motivation:

There is some unnecessary code (like toString() calls) which can be cleaned up.

Modifications:

- Remove not needed toString() calls
- Simplify subString(...) calls
- Remove some explicit casts when not needed.

Result:

Cleaner code
2019-07-04 08:51:47 +02:00
Norman Maurer
707c95e80d
Use ByteProcessor in HpackHuffmanDecoder to reduce bound-checks and r… (#9317)
Motivation:

ff0045e3e1 changed HpackHuffmanDecoder to use a lookup-table, which greatly improved performance. We can squeeze out another 3% win by using a ByteProcessor, which reduces the number of bounds checks / reference-count checks needed when processing byte-by-byte.

Modifications:

Implement logic with ByteProcessor

Result:

Another ~3% perf improvement which shows up when using h2load to simulate load.

`h2load -c 100 -m 100 --duration 60 --warm-up-time 10 http://127.0.0.1:8080`

Before:

```
finished in 70.02s, 620051.67 req/s, 20.70MB/s
requests: 37203100 total, 37203100 started, 37203100 done, 37203100 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 37203100 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.21GB (1302108500) total, 41.84MB (43872600) headers (space savings 90.00%), 460.24MB (482598600) data
                     min         max         mean         sd        +/- sd
time for request:      404us     24.52ms     15.93ms      1.45ms    87.90%
time for connect:        0us         0us         0us         0us     0.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :    6186.64     6211.60     6199.00        5.18    65.00%
```

With this change:

```
finished in 70.02s, 642103.33 req/s, 21.43MB/s
requests: 38526200 total, 38526200 started, 38526200 done, 38526200 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 38526200 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.26GB (1348417000) total, 42.39MB (44444900) headers (space savings 90.00%), 466.25MB (488893900) data
                     min         max         mean         sd        +/- sd
time for request:      370us     24.89ms     15.52ms      1.35ms    88.02%
time for connect:        0us         0us         0us         0us     0.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :    6407.06     6435.19     6419.74        5.62    67.00%
```
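
An illustrative use of `ByteProcessor` showing why it helps: the bounds and reference-count checks happen once per `forEachByte(...)` call instead of once per `readByte()`. The processor below just counts bits and is not the Huffman decoding logic:

```
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

public final class ByteProcessorExample {

    public static int countOneBits(ByteBuf buf) {
        OneBitCounter counter = new OneBitCounter();
        // One bounds check up front, then the processor sees every readable byte.
        buf.forEachByte(buf.readerIndex(), buf.readableBytes(), counter);
        return counter.bits;
    }

    static final class OneBitCounter implements ByteProcessor {
        int bits;

        @Override
        public boolean process(byte value) {
            bits += Integer.bitCount(value & 0xFF);
            return true; // keep processing until the end
        }
    }
}
```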
2019-07-04 08:46:08 +02:00
Norman Maurer
16b98d370f
Correctly handle http2 upgrades when Http2FrameCodec is used together… (#9318)
Motivation:

In the latest release we introduced Http2MultiplexHandler as a replacement for Http2MultiplexCodec. This split the frame parsing from the multiplexing to allow a more flexible way to handle frames and to make the code cleaner. Unfortunately we missed handling this specially in Http2ServerUpgradeCodec and so did not correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...). This led to the situation that we did not correctly receive the event on the Http2MultiplexHandler and so did not correctly create the Http2StreamChannel for the upgrade stream. Because of this we ended up with an NPE if a frame was dispatched to the upgrade stream later on.

Modifications:

- Correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...)
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9314.
2019-07-04 08:32:41 +02:00
Norman Maurer
1b82474286
Fix NPE caused by re-entrance calls in FlowControlHandler (#9320)
Motivation:

2c99fc0f12 introduced a change that eagerly recycles the queue. Unfortunately it did not correctly protect against re-entrance, which can cause an NPE.

Modifications:

- Correctly protect against re-entrance by adding null checks
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9319.
2019-07-03 19:55:18 +02:00
Carl Mastrangelo
ff0045e3e1 Use Table lookup for HPACK decoder (#9307)
Motivation:
Table based decoding is fast.

Modification:
Use table based decoding in HPACK decoder, inspired by
https://github.com/python-hyper/hpack/blob/master/hpack/huffman_table.py

This modifies the table to be based on integers, rather than 3-tuples of
bytes.  This is for two reasons:

1.  It's faster
2.  Using bytes makes the static initializer too big, and doesn't compile.

Result:
Faster Huffman decoding. This only seems to help the ASCII case; the other decoding is about the same.

Benchmarks:

```
Before:
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score       Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   426293.636 ±  1444.843  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    57843.738 ±   725.704  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3002.412 ±    16.998  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   412339.400 ±  1128.394  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    58226.870 ±   199.591  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3044.256 ±    10.675  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2082615.030 ±  5929.726  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   10   571640.454 ± 26499.229  ops/s
HpackDecoderBenchmark.decode           false         true   LARGE  thrpt   20    92714.555 ±  2292.222  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1745872.421 ±  6788.840  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   20   490420.323 ±  2455.431  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   20    84536.200 ±   398.714  ops/s

After(bytes):
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score      Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   472649.148 ± 7122.461  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    66739.638 ±  341.607  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3139.773 ±   24.491  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   466933.833 ± 4514.971  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    66111.778 ±  568.326  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3143.619 ±    3.332  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2109995.177 ± 6203.143  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   20   586026.055 ± 1578.550  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1775723.270 ± 4932.057  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   20   493316.467 ± 1453.037  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   10    85726.219 ±  402.573  ops/s

After(ints):
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score       Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   615549.006 ±  5282.283  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    86714.630 ±   654.489  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3984.439 ±    61.612  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   602489.337 ±  5397.024  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    88399.109 ±   241.115  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3875.729 ±   103.057  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2092165.454 ± 11918.859  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   20   583465.437 ±  5452.115  ops/s
HpackDecoderBenchmark.decode           false         true   LARGE  thrpt   20    93290.061 ±   665.904  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1758402.495 ± 14677.438  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   10   491598.099 ±  5029.698  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   20    85834.290 ±   554.915  ops/s
```
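
A purely illustrative sketch of the "3-tuples of bytes" to flat-int-table idea: the per-entry fields are packed into one primitive so the table fits in a single int[]. The actual field layout in HpackHuffmanDecoder differs:

```
public final class PackedTableEntry {

    // Fold three small fields into one int entry.
    static int pack(int nextState, int flags, int symbol) {
        return (nextState << 16) | ((flags & 0xFF) << 8) | (symbol & 0xFF);
    }

    static int nextState(int entry) {
        return entry >>> 16;
    }

    static int flags(int entry) {
        return (entry >>> 8) & 0xFF;
    }

    static int symbol(int entry) {
        return entry & 0xFF;
    }
}
```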
2019-07-02 20:09:44 +02:00
秦世成
18e4121952 Pre-decompressed DNS record RData that may contain compression pointers (#9311)
Motivation:

When decoding a DnsRecord, if the record contains compression pointers and only some of the pointers are decompressed, then when re-encoding the record the compression pointers will point to the wrong location, resulting in a bad-label problem.

Modification:

Pre-decompress record RData that may contain compression pointers.

Result:

Fixes #8962
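
For context, the compression pointers in question are the RFC 1035 section 4.1.4 kind: a label length octet whose top two bits are set marks a pointer, and the remaining 14 bits are an offset from the start of the DNS message. Illustrative helpers, not the decoder's API:

```
public final class DnsCompressionPointer {

    // True if the label length octet is actually a compression pointer (0b11xxxxxx).
    static boolean isCompressionPointer(int lengthOctet) {
        return (lengthOctet & 0xC0) == 0xC0;
    }

    // 14-bit offset from the start of the DNS message.
    static int pointerOffset(int firstOctet, int secondOctet) {
        return ((firstOctet & 0x3F) << 8) | (secondOctet & 0xFF);
    }
}
```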
2019-07-02 19:38:50 +02:00
Carl Mastrangelo
deea51e609 Disable Huffman encoding for small headers (#9260)
Motivation:

Huffman coding saves only a little space, but has a huge CPU cost

Modification:

Disable Huffman coding for headers smaller than 512 bytes. Also, add a
configurable limit to the encoder.

Result:

Faster HPACK

BEFORE:
```
Benchmark                     (duplicates)  (limitToAscii)  (sensitive)  (size)  Mode  Cnt       Score       Error  Units
HpackEncoderBenchmark.encode          true            true         true   SMALL  avgt   10    2572.595 ±    16.184  ns/op
HpackEncoderBenchmark.encode          true            true         true  MEDIUM  avgt   10   19580.815 ±   397.780  ns/op
HpackEncoderBenchmark.encode          true            true         true   LARGE  avgt   10  379456.381 ±  2059.919  ns/op
HpackEncoderBenchmark.encode          true            true        false   SMALL  avgt   10     730.579 ±     8.116  ns/op
HpackEncoderBenchmark.encode          true            true        false  MEDIUM  avgt   10    2087.590 ±    84.644  ns/op
HpackEncoderBenchmark.encode          true            true        false   LARGE  avgt   10   11725.228 ±    89.298  ns/op
HpackEncoderBenchmark.encode          true           false         true   SMALL  avgt   10     555.971 ±     5.120  ns/op
HpackEncoderBenchmark.encode          true           false         true  MEDIUM  avgt   10    2831.874 ±    41.801  ns/op
HpackEncoderBenchmark.encode          true           false         true   LARGE  avgt   10   36054.025 ±   179.504  ns/op
HpackEncoderBenchmark.encode          true           false        false   SMALL  avgt   10     340.337 ±     3.313  ns/op
HpackEncoderBenchmark.encode          true           false        false  MEDIUM  avgt   10    1006.817 ±     8.942  ns/op
HpackEncoderBenchmark.encode          true           false        false   LARGE  avgt   10    8784.168 ±   164.014  ns/op
HpackEncoderBenchmark.encode         false            true         true   SMALL  avgt   10    2561.934 ±    27.056  ns/op
HpackEncoderBenchmark.encode         false            true         true  MEDIUM  avgt   10   22061.105 ±   154.533  ns/op
HpackEncoderBenchmark.encode         false            true         true   LARGE  avgt   10  435744.897 ±  8853.388  ns/op
HpackEncoderBenchmark.encode         false            true        false   SMALL  avgt   10    2737.683 ±    47.142  ns/op
HpackEncoderBenchmark.encode         false            true        false  MEDIUM  avgt   10   22385.146 ±    98.430  ns/op
HpackEncoderBenchmark.encode         false            true        false   LARGE  avgt   10  408159.698 ± 12044.931  ns/op
HpackEncoderBenchmark.encode         false           false         true   SMALL  avgt   10     544.213 ±     3.279  ns/op
HpackEncoderBenchmark.encode         false           false         true  MEDIUM  avgt   10    2908.978 ±    31.026  ns/op
HpackEncoderBenchmark.encode         false           false         true   LARGE  avgt   10   36471.262 ±  1044.010  ns/op
HpackEncoderBenchmark.encode         false           false        false   SMALL  avgt   10     609.305 ±     4.371  ns/op
HpackEncoderBenchmark.encode         false           false        false  MEDIUM  avgt   10    3223.946 ±    23.505  ns/op
HpackEncoderBenchmark.encode         false           false        false   LARGE  avgt   10   39975.152 ±   655.196  ns/op
```

AFTER:
```
Benchmark                     (duplicates)  (limitToAscii)  (sensitive)  (size)  Mode  Cnt     Score     Error  Units
HpackEncoderBenchmark.encode          true            true         true   SMALL  avgt    5   379.473 ± 133.815  ns/op
HpackEncoderBenchmark.encode          true            true         true  MEDIUM  avgt    5  1118.772 ±  89.258  ns/op
HpackEncoderBenchmark.encode          true            true         true   LARGE  avgt    5  5366.828 ±  89.746  ns/op
HpackEncoderBenchmark.encode          true            true        false   SMALL  avgt    5   284.401 ±   2.088  ns/op
HpackEncoderBenchmark.encode          true            true        false  MEDIUM  avgt    5   922.805 ±  10.796  ns/op
HpackEncoderBenchmark.encode          true            true        false   LARGE  avgt    5  8727.831 ± 462.138  ns/op
HpackEncoderBenchmark.encode          true           false         true   SMALL  avgt    5   337.093 ±  22.585  ns/op
HpackEncoderBenchmark.encode          true           false         true  MEDIUM  avgt    5   693.689 ±  16.351  ns/op
HpackEncoderBenchmark.encode          true           false         true   LARGE  avgt    5  5616.786 ±  98.647  ns/op
HpackEncoderBenchmark.encode          true           false        false   SMALL  avgt    5   286.708 ±  13.765  ns/op
HpackEncoderBenchmark.encode          true           false        false  MEDIUM  avgt    5   906.279 ±  32.338  ns/op
HpackEncoderBenchmark.encode          true           false        false   LARGE  avgt    5  8304.736 ± 128.584  ns/op
HpackEncoderBenchmark.encode         false            true         true   SMALL  avgt    5   351.381 ±  15.547  ns/op
HpackEncoderBenchmark.encode         false            true         true  MEDIUM  avgt    5  1188.166 ±   7.023  ns/op
HpackEncoderBenchmark.encode         false            true         true   LARGE  avgt    5  6876.009 ±  48.117  ns/op
HpackEncoderBenchmark.encode         false            true        false   SMALL  avgt    5   434.759 ±   8.619  ns/op
HpackEncoderBenchmark.encode         false            true        false  MEDIUM  avgt    5   954.588 ±  58.514  ns/op
HpackEncoderBenchmark.encode         false            true        false   LARGE  avgt    5  8534.017 ± 552.597  ns/op
HpackEncoderBenchmark.encode         false           false         true   SMALL  avgt    5   223.713 ±   4.823  ns/op
HpackEncoderBenchmark.encode         false           false         true  MEDIUM  avgt    5  1181.538 ±  11.851  ns/op
HpackEncoderBenchmark.encode         false           false         true   LARGE  avgt    5  6670.830 ± 267.927  ns/op
HpackEncoderBenchmark.encode         false           false        false   SMALL  avgt    5   424.609 ±  27.477  ns/op
HpackEncoderBenchmark.encode         false           false        false  MEDIUM  avgt    5  1003.578 ±  53.991  ns/op
HpackEncoderBenchmark.encode         false           false        false   LARGE  avgt    5  8428.932 ± 102.838  ns/op
```
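
A purely illustrative sketch of the decision described above, with the 512-byte cutoff from this change; names and the exact rule are assumptions, not the HpackEncoder's code:

```
public final class HuffmanThreshold {

    // Below the cutoff the value is written as plain octets; above it, Huffman
    // coding is used only if it actually shortens the output.
    static boolean useHuffman(int valueLength, int huffmanEncodedLength, int huffCodeThreshold) {
        return valueLength >= huffCodeThreshold && huffmanEncodedLength < valueLength;
    }
}
```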
2019-07-01 21:00:20 +02:00
Norman Maurer
131be58f48
Correctly take length of ByteBufInputStream into account for readLine… (#9310)
* Correctly take length of ByteBufInputStream into account for readLine() / readByte()

Motivation:

ByteBufInputStream did not correctly take the length into account when validating bounds for readLine() / readByte(), which could lead to reading more than allowed.

Modifications:

- Correctly take length into account
- Add unit tests
- Fix existing unit test

Result:

Correctly take length of ByteBufInputStream into account.
Related to https://github.com/netty/netty/pull/9306.
2019-07-01 20:55:23 +02:00
Dmitriy Dumanskiy
5ded050f7b #7285 Improved "Discarded inbound message" warning (#9286)
Motivation:

On servers with many pipelines or dynamic pipelines, it is easy for the end user to make a mistake during pipeline configuration. The current message:

`Discarded inbound message PooledUnsafeDirectByteBuf(ridx: 0, widx: 2, cap: 2) that reached at the tail of the pipeline. Please check your pipeline configuration.`

is not always meaningful and doesn't allow finding the wrong pipeline quickly.

Modification:

Added an additional log placeholder that identifies the pipeline handlers and channel info. This will allow end users to quickly find the problem pipeline.

Result:

Meaningful warning when the message reaches the end of the pipeline. Fixes #7285
2019-07-01 20:38:58 +02:00
xiaoheng1
f8c1f350db Fix public int read() throws IOException method exceeds the limit of length (#9306)
Motivation:

buffer.isReadable() should not be used to limit the amount of data that can be read, as the amount may be less than what is readable.

Modification:

- Use available(), which takes the length into account
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9305
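
An illustrative sketch of the idea behind the fix: bound single-byte reads by the stream's own remaining length, not by the buffer's readability. The class is a stand-in, not ByteBufInputStream itself:

```
import io.netty.buffer.ByteBuf;

final class LengthBoundedReader {
    private final ByteBuf buffer;
    private int remaining;

    LengthBoundedReader(ByteBuf buffer, int length) {
        this.buffer = buffer;
        this.remaining = length;
    }

    int available() {
        return remaining;
    }

    int read() {
        if (available() == 0) {
            return -1; // length exhausted even if the buffer still has readable bytes
        }
        remaining--;
        return buffer.readByte() & 0xFF;
    }
}
```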
2019-07-01 15:57:34 +02:00
Cory Benfield
14154074f2 Don't loop over TLS records for SNI (#7479)
Motivation:

The AbstractSniHandler previously was willing to tolerate up to three
non-handshake records before a ClientHello that contained an SNI
extension field. This is, so far as I can tell, completely
unnecessary: no TLS implementation will be sending alerts or change
cipher spec messages before ClientHello.

Given that it was not possible to determine why this loop is in
the code to begin with, it's probably just best to remove it.

Modifications:

Remove the for loop.

Result:

The AbstractSniHandler will more rapidly determine whether it should
pass the records on to the default SSL handler.

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
2019-07-01 11:22:55 +02:00
秦世成
4596f9e139 Fix the issue of incorrectly calculating the number of dump lines when using PrettyDump in ByteBufUtil (#9304)
Motivation:

Fix the issue of incorrectly calculating the number of dump rows when using the prettyHexDump method in ByteBufUtil. The way to find the remainder is either length % 16 or length & 15.

Modification:

Fixed the way to calculate the remainder

Result:

Fixed #9301
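
A small self-contained check of the remainder identity mentioned above:

```
public final class HexDumpRows {
    public static void main(String[] args) {
        // For non-negative lengths, length % 16 and length & 15 are the same value,
        // so either form gives the number of bytes in the final (partial) dump row.
        int length = 70;
        int fullRows = length >>> 4;   // 4 full rows of 16 bytes
        int remainder = length & 15;   // 6 leftover bytes, same as length % 16
        System.out.println(fullRows + " full rows, " + remainder + " leftover bytes");
    }
}
```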
2019-07-01 08:35:18 +02:00
Carl Mastrangelo
262ced7ce4 Unconditionally initialize sockaddrs in epoll linuxsocket (#9299)
Motivation:

Compiling with -Werror,-Wuninitialized complains about the sockaddrs being uninitialized.
I believe this is because the init function netty_unix_socket_initSockaddr is in a
separate compilation unit. Since this code isn't on the critical path, it's easy
to just memset the variables rather than suppress the warning.

Modification:
Always clear the sockaddrs, even if they will be initialized later.

Result:
Able to compile with warnings turned on
2019-06-29 12:16:58 +02:00
Norman Maurer
05c2967e4a
Http2FrameCodecBuilder.autoAckSettingsFrame(...) must be public (#9295)
Motivation:

b3dba317d7 added AbstractHttp2ConnectionBuilder.autoAckSettingsFrame(...) as a protected method and made it public for Http2MultiplexCodecBuilder. Unfortunately it missed also making it public in Http2FrameCodecBuilder.

Modifications:

Correctly override autoAckSettingsFrame in Http2FrameCodecBuilder and so make it usable when building Http2FrameCodec.

Result:

Be able to also configure autoAckSettingsFrame when Http2FrameCodec is used.
2019-06-29 09:23:38 +02:00