Compare commits

371 Commits

Author SHA1 Message Date
Norman Maurer d08e546c0f WIP 2019-07-26 15:21:01 +02:00
Norman Maurer 780b04ad44 Scott's comment 2019-07-26 13:31:14 +02:00
shorea fb6c8c658b Sending RST_STREAM when PUSH_PROMISE for a canceled stream arrives 2019-07-26 13:29:23 +02:00
shorea cd9761b2fb Issue #8025. Ignoring HEADER and DATA frames for streams that may have existed in the past. 2019-07-26 13:29:16 +02:00
root 718b7626e6 [maven-release-plugin] prepare for next development iteration 2019-07-24 09:05:57 +00:00
root 465c900c04 [maven-release-plugin] prepare release netty-4.1.38.Final 2019-07-24 09:05:23 +00:00
Per Lundberg aa032b8aea Future.java: Fix typos in Javadoc (#9391)
Motivation:

Docs should have no typos

Modifications:

Fix a few typos

Result:

More correct docs.
2019-07-24 07:23:29 +02:00
Norman Maurer 513e9f2893
HTTP/2: Ensure newStream() is called only once per connection upgrade and the correct handler is used (#9396)
Motivation:

306299323c introduced a code change to move the responsibility of creating the stream for the upgrade to Http2FrameCodec. Unfortunately this led to newStream().setStreamAndProperty(...) being called twice. Because of this we only ever saw channelActive(...) on the Http2StreamChannel but no other events, as the mapping was replaced on the second newStream().setStreamAndProperty(...) call.

Besides this, we also did not use the correct handler for the upgrade stream in some cases.

Modifications:

- Just remove the Http2FrameCodec.onHttpClientUpgrade() method and so let the base class handle all of it. The stream is created correctly as part of the ConnectionListener implementation of Http2FrameCodec already.
- Consolidate logic of creating stream channels
- Adjust unit test to capture the bug

Result:

Fixes https://github.com/netty/netty/issues/9395
2019-07-23 21:05:39 +02:00
YuanHu 94f3930850 Recycler availableSharedCapacity will be slowly exhausted due to a missing reclaimSpace(...) call (#9394)
Motivation:

We missed calling reclaimSpace(...) in one case, which can lead to the Recycler not correctly reclaiming space and so creating new objects when not needed.

Modifications:

Correctly call reclaimSpace(...)

Result:

Recycler correctly reclaims space in all situations.
2019-07-21 21:06:31 +02:00
Norman Maurer 60cf18cf20
HTTP/2 multiplex: Correctly process buffered inbound data even if autoRead is false (#9389)
Motivation:

When using the HTTP/2 multiplex implementation we need to ensure we correctly drain the buffered inbound data even if the RecvByteBufAllocator.Handle tells us to stop reading in between.

Modifications:

Correctly loop through the buffered inbound data until the user stops requesting more of it.

Result:

Fixes https://github.com/netty/netty/issues/9387.

Co-authored-by: Bryce Anderson <banderson@twitter.com>
2019-07-21 20:58:23 +02:00
Norman Maurer 04afa3a07e
Reuse Http2FrameStreamEvent instances to reduce GC pressure (#9392)
Motivation:

We can easily reuse the Http2FrameStreamEvent instances and so reduce GC pressure, as there may be multiple events per stream over its lifetime.

Modifications:

Reuse instances

Result:

Less allocations
2019-07-21 20:35:35 +02:00
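
A minimal sketch of the reuse idea described in the commit message above, with hypothetical class names: one immutable event object is created per stream and handed out for every state change instead of allocating a new event each time.

```
// Hypothetical sketch: one event instance per stream, reused for every state change.
final class StreamStateChangedEvent {
    private final Object stream;

    StreamStateChangedEvent(Object stream) {
        this.stream = stream;
    }

    Object stream() {
        return stream;
    }
}

final class StreamState {
    // Allocated once; fired many times over the stream's lifetime.
    private final StreamStateChangedEvent stateChangedEvent = new StreamStateChangedEvent(this);

    StreamStateChangedEvent stateChangedEvent() {
        return stateChangedEvent;
    }
}
```
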
Norman Maurer 924150198e
Update java versions (#9393)
Motivation:

There were new openjdk releases

Modifications:

Update releases to latest

Result:

Use latest openjdk versions on CI
2019-07-21 20:34:26 +02:00
Norman Maurer 1e8c0c59f1
Use allocator when constructing ByteBufHolder sub-types or use Unpool… (#9377)
Motivation:

In many places Netty uses Unpooled.buffer(0) where it should use EMPTY_BUFFER. We can't change this due to backwards compatibility in the constructors, but we can use Unpooled.EMPTY_BUFFER in some cases to ensure we do not allocate at all. In other places we can directly use the allocator, either from the Channel / ChannelHandlerContext or from the request / response.

Modification:

- Use Unpooled.EMPTY_BUFFER where possible
- Use allocator where possible

Result:

Fixes #9345 for websockets and http package
2019-07-18 10:29:50 +02:00
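
A minimal sketch of the two patterns described above; the handler and message names are hypothetical. Unpooled.EMPTY_BUFFER avoids any allocation for empty payloads, while ctx.alloc() uses the channel's configured (possibly pooled) allocator.

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

// Hypothetical handler illustrating the two patterns from this change.
public class EmptyVsAllocatedHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ReferenceCountUtil.release(msg); // incoming message not used in this sketch

        // Empty payload: use the shared EMPTY_BUFFER instead of Unpooled.buffer(0),
        // so nothing is allocated at all.
        ByteBuf empty = Unpooled.EMPTY_BUFFER;

        // Non-empty payload: use the channel's allocator instead of the unpooled heap default.
        ByteBuf payload = ctx.alloc().buffer(64);
        payload.writeBytes(new byte[] { 1, 2, 3 });

        ctx.write(empty);
        ctx.writeAndFlush(payload);
    }
}
```
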
Norman Maurer 84cf8f14e9
Cache the ChannelHandlerContext used in Http2StreamChannelBootstrap (#9382)
Motivation:

At the moment we look up the ChannelHandlerContext used in Http2StreamChannelBootstrap each time the open(...) method is invoked. This is not needed and we can just cache it for later use.

Modifications:

Cache ChannelHandlerContext in volatile field.

Result:

Speed up the open(...) method when it is called multiple times
2019-07-18 10:20:34 +02:00
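
A minimal sketch of the caching pattern described above, using hypothetical names: the context is looked up once and kept in a volatile field so later calls skip the pipeline traversal (the real change caches the context of a specific handler; firstContext() is used here only to keep the example self-contained).

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;

// Hypothetical sketch of caching a pipeline lookup result in a volatile field.
final class CachedContextExample {
    private final ChannelPipeline pipeline;
    private volatile ChannelHandlerContext cachedCtx;

    CachedContextExample(ChannelPipeline pipeline) {
        this.pipeline = pipeline;
    }

    ChannelHandlerContext findCtx() {
        ChannelHandlerContext ctx = cachedCtx;
        if (ctx == null) {
            // First call: do the (relatively expensive) lookup and cache the result.
            ctx = pipeline.firstContext();
            cachedCtx = ctx;
        }
        return ctx;
    }
}
```
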
Norman Maurer 26c3abc63c
Add websocket encoder / decoder in correct order to the pipeline when HttpServerCodec is used (#9386)
Motivation:

We need to ensure we place the encoder before the decoder when doing the websockets upgrade as the decoder may produce a close frame when protocol violations are detected.

Modifications:

- Correctly place encoder before decoder
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9300
2019-07-18 10:19:09 +02:00
Bryce Anderson dd1785ba66 Fix an NPE in AbstractHttp2StreamChannel (#9379)
Motivation:

If a read triggers a AbstractHttp2StreamChannel to close we can
get an NPE in the read loop.

Modifications:

Make sure that the inboundBuffer isn't null before attempting to
continue the loop.

Result:

No NPE.
Fixes #9337
2019-07-17 20:12:19 +02:00
Norman Maurer e8ab79f34d
Add testcase to prove that ET semantics for eventFD are correct (#9385)
Motivation:

We recently made a change to use ET for the eventfd and not trigger a read each time. This testcase proves everything works as expected.

Modifications:

Add a testcase that verifies that the wakeups happen correctly

Result:

More tests
2019-07-17 12:23:08 +02:00
Norman Maurer 4a3dd23f2f
Update to adopt@1.8.212-04 (#9384)
Motivation:

We should use latest jdk 1.8 release

Modifications:

Update to adopt@1.8.212-04

Result:

Use latest jdk 1.8 on ci
2019-07-17 09:41:45 +02:00
Norman Maurer e8d27560d0
Use latest OpenJDK13 EA release (#9378)
Motivation:

A new EA release for OpenJDK13 was released

Modifications:

Update EA version

Result:

Use latest OpenJDK 13 EA on ci
2019-07-17 09:29:43 +02:00
Dmitriy Dumanskiy a82d62ae67 prefer instanceOf instead of getClass() (#9366)
Motivation:

`instanceOf` doesn't perform a null check like `getClass()` does, so `instanceOf` may be faster. However, this is not true for all cases, as C2 could eliminate these null checks for `getClass()`.

Modification:

Replaced `string.getClass() == AsciiString.class` with `string instanceof AsciiString`.

Proof:

```
@BenchmarkMode(Mode.Throughput)
@Fork(value = 1)
@State(Scope.Thread)
@Warmup(iterations = 5, time = 1, batchSize = 1000)
@Measurement(iterations = 10, time = 1, batchSize = 1000)
public class GetClassInstanceOf {

    Object key;

    @Setup
    public void setup() {
        key = "123";
    }

    @Benchmark
    public boolean getClassEquals() {
        return key.getClass() == String.class;
    }

    @Benchmark
    public boolean instanceOf() {
        return key instanceof String;
    }

}
```

```
Benchmark                           Mode  Cnt       Score      Error  Units
GetClassInstanceOf.getClassEquals  thrpt   10  401863.130 ± 3092.568  ops/s
GetClassInstanceOf.instanceOf      thrpt   10  421386.176 ± 4317.328  ops/s
```
2019-07-16 21:20:12 +02:00
Emily Littleworth d07d7e2b9a Return null in HttpPostRequestEncoder (#9352)
Motivation:

If the encoded value of a form element happens to exactly hit
the chunk limit (8096 bytes), the post request encoder will
throw a NullPointerException.

Modifications:

Catch the null case and return.

Result:

No NPE.
2019-07-16 13:29:33 +02:00
Norman Maurer 306299323c
Move responsibility for creating upgrade stream to Http2FrameCodec (#9360)
Motivation:

The Http2FrameCodec should be responsible to create the upgrade stream.

Modifications:

Move code to create stream to Http2FrameCodec

Result:

More correct responsibility
2019-07-16 13:24:45 +02:00
Nick Hill 1748352d98 Fix epoll spliceTo file descriptor with offset (#9369)
Motivation

The AbstractEpollStreamChannel::spliceTo(FileDescriptor, ...) methods
take an offset parameter but this was effectively ignored due to what
looks like a typo in the corresponding JNI function impl. Instead it
would always use the file's own native offset.

Modification

- Fix typo in netty_epoll_native_splice0() and offset accounting in
AbstractEpollStreamChannel::SpliceFdTask.
- Modify unit test to include an invocation of the public spliceTo
method using non-zero offset.

Result

spliceTo FD methods work as expected when an offset is provided.
2019-07-16 13:22:30 +02:00
Dmitriy Dumanskiy cd824e4e31 Cleanup in websockets, throw exception before allocating response if possible (#9361)
Motivation:

While fixing #9359, I found a few places that could be patched / improved separately.

Modification:

On handshake response generation, throw the exception before allocating response objects if the request is invalid.

Result:

No more leaks when exception is thrown.
2019-07-16 13:12:17 +02:00
Norman Maurer 4f172c13bb
Add deprecation to Http2StreamChannelBootstrap.open0(...) as it was marked as public by mistake (#9372)
Motivation:

Mark Http2StreamChannelBootstrap.open0(...) as deprecated as the user should not use it. It was marked as public by mistake.

Modifications:

Add deprecation warning.

Result:

User will be aware the method should not be used directly.
2019-07-16 13:08:09 +02:00
Norman Maurer 906fc02b3f
Allow to disable automatically sending PING acks. (#9338)
Motivation:

There are situations where the user may want to be more flexible about when to send the PING acks, for various reasons, or be able to attach a listener to the future that is used for the PING ack. To be able to do so we should allow the acks to be managed manually.

Modifications:

- Add a constructor to DefaultHttp2ConnectionDecoder that allows disabling the automatic sending of PING acks (the default is to send them automatically so as not to break users)
- Add methods to AbstractHttp2ConnectionHandlerBuilder (and sub-classes) to either enable or disable auto acks for PINGs
- Make the DefaultHttp2PingFrame constructor that allows writing acks public.
- Add unit test

Result:

More flexible way of handling acks.
2019-07-12 18:15:06 +02:00
Nick Hill 5384bbcf85 Epoll: Don't wake event loop when splicing (#9354)
Motivation

I noticed this while looking at something else.
AbstractEpollStreamChannel::spliceQueue is an MPSC queue but is only
accessed from the event loop, so it could just be changed to e.g. an
ArrayDeque. This PR instead reverts to using it as an MPSC queue to
avoid dispatching a task to the EL, as appears to have been the original
intention.

Modification

Change AbstractEpollStreamChannel::spliceQueue to be volatile and lazily
initialized via double-checked locking. Add tasks directly to the queue
from the public methods rather than possibly waking the EL just to
enqueue.

An alternative is just to change PlatformDependent.newMpscQueue() to new
ArrayDeque() and be done with it :)

Result

Less disruptive channel/fd-splicing.
2019-07-12 18:06:26 +02:00
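
A minimal sketch of the lazily-initialized, double-checked-locking queue pattern described above; the class and field names are hypothetical, and a JDK queue stands in for Netty's MPSC queue to keep the example self-contained.

```
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: lazily create the queue with double-checked locking and
// enqueue directly from the caller without waking the event loop just to enqueue.
final class LazySpliceQueueExample {
    private volatile Queue<Runnable> spliceQueue;

    void enqueue(Runnable task) {
        Queue<Runnable> queue = spliceQueue;
        if (queue == null) {
            synchronized (this) {
                queue = spliceQueue;
                if (queue == null) {
                    queue = new ConcurrentLinkedQueue<>();
                    spliceQueue = queue;
                }
            }
        }
        // Add directly to the queue; the event loop will drain it when it runs anyway.
        queue.add(task);
    }
}
```
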
Andrey Mizurov be26f4e00f Fixed incorrect Sec-WebSocket-Origin header for v13, see #9134 (#9312)
Motivation:

Based on https://tools.ietf.org/html/rfc6455#section-1.3 - for non-browser
clients, Origin header field may be sent if it makes sense in the context of those clients.

Modification:

Replace Sec-WebSocket-Origin with Origin

Result:

Fixes #9134 .
2019-07-12 12:05:39 +02:00
Robin Gong b02ee1106f feat(example-mqtt): new MQTT heartBeat broker and client examples (#9336)
Motivation:

Recently I set out to build an MQTT broker and client based on Netty. I found the MQTT encoder and decoder, but no basic examples. So I'm sharing my simple heartBeat MQTT broker and client as an example.

Modification:

New MQTT heartBeat example under io.netty.example/mqtt/heartBeat/.

Result:

The client sends CONNECT and PINGREQ (heartBeat messages):
  - CONNECT: once the channel becomes active
  - PINGREQ: once an IdleStateEvent is triggered, which is every 20 seconds in this example (a sketch of this idle-triggered ping pattern follows below)
The client discards all messages it receives.
The MQTT broker can handle CONNECT, PINGREQ and DISCONNECT messages:
  - CONNECT: send CONNACK back
  - PINGREQ: send PINGRESP back
  - DISCONNECT: close the channel
The broker closes the channel if 2 heartBeats are lost, which is set to 45 seconds in this example.
2019-07-10 12:19:15 +02:00
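
A minimal sketch of the client-side heartbeat idea described above, under the assumption that an IdleStateHandler (e.g. new IdleStateHandler(0, 20, 0)) sits earlier in the pipeline; the handler name and buildPingRequest() are hypothetical placeholders, not the example's actual code.

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.timeout.IdleStateEvent;

// Hypothetical sketch: react to writer-idle events by sending a keep-alive message.
public class HeartBeatClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent) {
            // The connection has been idle: send a heartbeat instead of letting it go stale.
            ctx.writeAndFlush(buildPingRequest());
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }

    private Object buildPingRequest() {
        // Placeholder: in the real example this would be an MQTT PINGREQ message
        // built with the netty-codec-mqtt message classes.
        return "PINGREQ";
    }
}
```
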
Farid Zakaria 7fc355aa05 Introduce SslMasterKeyHandler (#8653)
Motivation

Debugging SSL/TLS connections through wireshark is a pain -- if the cipher used involves Diffie-Hellman then it is essentially impossible unless you can have the client dump out the master key [1]

This is a work-in-progress change (tests & comments to come!) that introduces a new handler you can set on the SslContext to receive the master key & session id. I'm hoping to get feedback if a change in this vein would be welcomed.

An implementation that conforms to Wireshark's NSS key log[2] file is also included.

Depending on feedback on the PR going forward I am planning to "clean it up" by adding documentation, example server & tests. Implementation will need to be finished as well for retrieving the master key from the OpenSSL context.

[1] https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
[2] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Key_Log_Format

Modification

- Added SslMasterKeyHandler
- An implementation of the handler that conforms to Wireshark's key log format is included.

Result:

Be able to debug SSL / TLS connections more easily.

Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
2019-07-10 12:02:46 +02:00
jingene c0f9364870 Change the netty.io homepage scheme(http -> https) (#9344)
Motivation:

The Netty homepage (netty.io) serves both "http" and "https".
It's recommended to use https rather than http.

Modification:

Changed "http://netty.io" to "https://netty.io".

Result:

No functional effects.
2019-07-09 21:09:42 +02:00
Norman Maurer bded2a1c75
HTTP2: Always apply the graceful shutdown timeout if configured (#9340)
Motivation:

Http2ConnectionHandler (and sub-classes) allow configuring a graceful shutdown timeout but only apply it if there is at least one active stream. We should always apply the timeout. This is also true when we try to send a GO_AWAY and close the connection because of a connection error.

Modifications:

- Always apply the timeout if one is configured
- Add unit test

Result:

Always respect gracefulShutdownTimeoutMillis
2019-07-09 21:05:34 +02:00
Norman Maurer 4312e11316
DecoratingHttp2ConnectionEncoder.consumeRemoteSettings must not throw if delegate is instance of Http2SettingsReceivedConsumer (#9343)
Motivation:

b3dba317d7 introduced the concept of Http2SettingsReceivedConsumer but did not correctly implement DecoratingHttp2ConnectionEncoder.consumeRemoteSettings(...).

Modifications:

- Add missing `else` around the throws
- Add unit tests

Result:

Correctly implement DecoratingHttp2ConnectionEncoder.consumeRemoteSettings(...)
2019-07-09 14:39:32 +02:00
Nick Hill 91d6e0ea8f Simplify HpackHuffmanDecoder table decode logic (#9335)
Motivation

The nice change made by @carl-mastrangelo in #9307 for lookup-table
based HPACK Huffman decoding can be simplified a little to remove the
separate flags field and eliminate some intermediate operations.

Modification

Simplify HpackHuffmanDecoder::decode logic including de-dup of the
per-nibble part.

Result

Less code, possibly better performance though not noticeable in a quick
benchmark.
2019-07-08 12:04:20 +02:00
Norman Maurer db8dd66f09
Reduce object creation on Http2FrameCodec (#9333)
Motivation:

We don't need the extra ChannelPromise when writing headers anymore in Http2FrameCodec. This also means we can re-use a ChannelFutureListener and so do not need to create new instances all the time.

Modifications:

- Just pass the original ChannelPromise when writing headers
- Reuse the ChannelFutureListener

Result:

Two fewer objects created when writing headers for a not-yet-created stream.
2019-07-06 09:08:20 +02:00
Norman Maurer 6da809dc11
Increase maxHeaderListSize for HpackDecoderBenchmark to be able to be… (#9321)
Motivation:

The previously used maxHeaderListSize was too low, which resulted in exceptions during the benchmark run:

```
io.netty.handler.codec.http2.Http2Exception: Header size exceeded max allowed size (8192)
	at io.netty.handler.codec.http2.Http2Exception.connectionError(Http2Exception.java:103)
	at io.netty.handler.codec.http2.Http2Exception.headerListSizeError(Http2Exception.java:188)
	at io.netty.handler.codec.http2.Http2CodecUtil.headerListSizeExceeded(Http2CodecUtil.java:231)
	at io.netty.handler.codec.http2.HpackDecoder$Http2HeadersSink.finish(HpackDecoder.java:545)
	at io.netty.handler.codec.http2.HpackDecoder.decode(HpackDecoder.java:132)
	at io.netty.handler.codec.http2.HpackDecoderBenchmark.decode(HpackDecoderBenchmark.java:85)
	at io.netty.handler.codec.http2.generated.HpackDecoderBenchmark_decode_jmhTest.decode_thrpt_jmhStub(HpackDecoderBenchmark_decode_jmhTest.java:120)
	at io.netty.handler.codec.http2.generated.HpackDecoderBenchmark_decode_jmhTest.decode_Throughput(HpackDecoderBenchmark_decode_jmhTest.java:83)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:453)
	at org.openjdk.jmh.runner.BenchmarkHandler$BenchmarkTask.call(BenchmarkHandler.java:437)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)

```

Also we should ensure we only use ASCII for header names.

Modifications:

Just use Integer.MAX_VALUE as the limit.

Result:

Be able to run the benchmark without exceptions.
2019-07-04 11:24:13 +02:00
jimin a0656d2a31 Remove unnecessary code (#9303)
Motivation:

There is some unnecessary code (like toString() calls) which can be cleaned up.

Modifications:

- Remove not needed toString() calls
- Simplify subString(...) calls
- Remove some explicit casts when not needed.

Result:

Cleaner code
2019-07-04 08:51:47 +02:00
Norman Maurer 707c95e80d
Use ByteProcessor in HpackHuffmanDecoder to reduce bound-checks and r… (#9317)
Motivation:

ff0045e3e1 changed HpackHuffmanDecoder to use a lookup-table which greatly improved performance. We can squeeze out another 3% win by using a ByteProcessor, which reduces the number of bounds checks / reference-count checks needed when processing byte-by-byte.

Modifications:

Implement logic with ByteProcessor

Result:

Another ~3% perf improvement which shows up when using h2load to simulate load.

`h2load -c 100 -m 100 --duration 60 --warm-up-time 10 http://127.0.0.1:8080`

Before:

```
finished in 70.02s, 620051.67 req/s, 20.70MB/s
requests: 37203100 total, 37203100 started, 37203100 done, 37203100 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 37203100 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.21GB (1302108500) total, 41.84MB (43872600) headers (space savings 90.00%), 460.24MB (482598600) data
                     min         max         mean         sd        +/- sd
time for request:      404us     24.52ms     15.93ms      1.45ms    87.90%
time for connect:        0us         0us         0us         0us     0.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :    6186.64     6211.60     6199.00        5.18    65.00%
```

With this change:

```
finished in 70.02s, 642103.33 req/s, 21.43MB/s
requests: 38526200 total, 38526200 started, 38526200 done, 38526200 succeeded, 0 failed, 0 errored, 0 timeout
status codes: 38526200 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 1.26GB (1348417000) total, 42.39MB (44444900) headers (space savings 90.00%), 466.25MB (488893900) data
                     min         max         mean         sd        +/- sd
time for request:      370us     24.89ms     15.52ms      1.35ms    88.02%
time for connect:        0us         0us         0us         0us     0.00%
time to 1st byte:        0us         0us         0us         0us     0.00%
req/s           :    6407.06     6435.19     6419.74        5.62    67.00%
```
2019-07-04 08:46:08 +02:00
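
A minimal sketch of the ByteProcessor idea described above, with a hypothetical processor: ByteBuf.forEachByte(...) performs the bounds and reference-count checks once for the whole run instead of once per readByte()/getByte() call.

```
import io.netty.buffer.ByteBuf;
import io.netty.util.ByteProcessor;

// Hypothetical processor: counts newline bytes while walking the buffer byte-by-byte.
final class NewlineCounter implements ByteProcessor {
    private int count;

    @Override
    public boolean process(byte value) {
        if (value == '\n') {
            count++;
        }
        return true; // keep processing until the end of the requested range
    }

    static int countNewlines(ByteBuf buf) {
        NewlineCounter counter = new NewlineCounter();
        // One call does the range check up front, then visits every readable byte.
        buf.forEachByte(buf.readerIndex(), buf.readableBytes(), counter);
        return counter.count;
    }
}
```
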
Norman Maurer 16b98d370f
Correctly handle http2 upgrades when Http2FrameCodec is used together… (#9318)
Motivation:

In the latest release we introduced Http2MultiplexHandler as a replacement for Http2MultiplexCodec. This split the frame parsing from the multiplexing to allow a more flexible way to handle frames and to make the code cleaner. Unfortunately we missed handling this specially in Http2ServerUpgradeCodec and so did not correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...). This led to the situation that we did not correctly receive the event on the Http2MultiplexHandler and so did not correctly create the Http2StreamChannel for the upgrade stream. Because of this we ended up with an NPE if a frame was dispatched to the upgrade stream later on.

Modifications:

- Correctly add Http2MultiplexHandler to the pipeline before calling Http2FrameCodec.onHttpServerUpgrade(...)
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9314.
2019-07-04 08:32:41 +02:00
Norman Maurer 1b82474286
Fix NPE caused by re-entrance calls in FlowControlHandler (#9320)
Motivation:

2c99fc0f12 introduced a change that eagerly recycles the queue. Unfortunately it did not correctly protect against re-entrance, which can cause an NPE.

Modifications:

- Correctly protect against re-entrance by adding null checks
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9319.
2019-07-03 19:55:18 +02:00
Carl Mastrangelo ff0045e3e1 Use Table lookup for HPACK decoder (#9307)
Motivation:
Table based decoding is fast.

Modification:
Use table based decoding in HPACK decoder, inspired by
https://github.com/python-hyper/hpack/blob/master/hpack/huffman_table.py

This modifies the table to be based on integers, rather than 3-tuples of
bytes.  This is for two reasons:

1.  It's faster
2.  Using bytes makes the static initializer too big, and doesn't
compile.

Result:
Faster Huffman decoding.  This only seems to help the ASCII case; the
other decoding is about the same.

Benchmarks:

```
Before:
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score       Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   426293.636 ±  1444.843  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    57843.738 ±   725.704  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3002.412 ±    16.998  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   412339.400 ±  1128.394  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    58226.870 ±   199.591  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3044.256 ±    10.675  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2082615.030 ±  5929.726  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   10   571640.454 ± 26499.229  ops/s
HpackDecoderBenchmark.decode           false         true   LARGE  thrpt   20    92714.555 ±  2292.222  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1745872.421 ±  6788.840  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   20   490420.323 ±  2455.431  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   20    84536.200 ±   398.714  ops/s

After(bytes):
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score      Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   472649.148 ± 7122.461  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    66739.638 ±  341.607  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3139.773 ±   24.491  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   466933.833 ± 4514.971  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    66111.778 ±  568.326  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3143.619 ±    3.332  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2109995.177 ± 6203.143  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   20   586026.055 ± 1578.550  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1775723.270 ± 4932.057  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   20   493316.467 ± 1453.037  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   10    85726.219 ±  402.573  ops/s

After(ints):
Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt        Score       Error  Units
HpackDecoderBenchmark.decode            true         true   SMALL  thrpt   20   615549.006 ±  5282.283  ops/s
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   20    86714.630 ±   654.489  ops/s
HpackDecoderBenchmark.decode            true         true   LARGE  thrpt   20     3984.439 ±    61.612  ops/s
HpackDecoderBenchmark.decode            true        false   SMALL  thrpt   20   602489.337 ±  5397.024  ops/s
HpackDecoderBenchmark.decode            true        false  MEDIUM  thrpt   20    88399.109 ±   241.115  ops/s
HpackDecoderBenchmark.decode            true        false   LARGE  thrpt   20     3875.729 ±   103.057  ops/s
HpackDecoderBenchmark.decode           false         true   SMALL  thrpt   20  2092165.454 ± 11918.859  ops/s
HpackDecoderBenchmark.decode           false         true  MEDIUM  thrpt   20   583465.437 ±  5452.115  ops/s
HpackDecoderBenchmark.decode           false         true   LARGE  thrpt   20    93290.061 ±   665.904  ops/s
HpackDecoderBenchmark.decode           false        false   SMALL  thrpt   20  1758402.495 ± 14677.438  ops/s
HpackDecoderBenchmark.decode           false        false  MEDIUM  thrpt   10   491598.099 ±  5029.698  ops/s
HpackDecoderBenchmark.decode           false        false   LARGE  thrpt   20    85834.290 ±   554.915  ops/s
```
2019-07-02 20:09:44 +02:00
秦世成 18e4121952 Pre-decompressed DNS record RData that may contain compression pointers (#9311)
Motivation:

When decoding a DnsRecord, if the record contains compression pointers and only part of them are decompressed, then when the record is encoded again the compression pointers will point to the wrong location, resulting in a bad-label problem.

Modification:

Pre-decompress record RData that may contain compression pointers.

Result:

Fixes #8962
2019-07-02 19:38:50 +02:00
Carl Mastrangelo deea51e609 Disable Huffman encoding for small headers (#9260)
Motivation:

Huffman coding saves only a little space, but has a huge CPU cost

Modification:

Disable Huffman coding for headers smaller than 512 bytes.  Also, add a
configurable limit to the encoder.

Result:

Faster HPACK

BEFORE:
```
Benchmark                     (duplicates)  (limitToAscii)  (sensitive)  (size)  Mode  Cnt       Score       Error  Units
HpackEncoderBenchmark.encode          true            true         true   SMALL  avgt   10    2572.595 ±    16.184  ns/op
HpackEncoderBenchmark.encode          true            true         true  MEDIUM  avgt   10   19580.815 ±   397.780  ns/op
HpackEncoderBenchmark.encode          true            true         true   LARGE  avgt   10  379456.381 ±  2059.919  ns/op
HpackEncoderBenchmark.encode          true            true        false   SMALL  avgt   10     730.579 ±     8.116  ns/op
HpackEncoderBenchmark.encode          true            true        false  MEDIUM  avgt   10    2087.590 ±    84.644  ns/op
HpackEncoderBenchmark.encode          true            true        false   LARGE  avgt   10   11725.228 ±    89.298  ns/op
HpackEncoderBenchmark.encode          true           false         true   SMALL  avgt   10     555.971 ±     5.120  ns/op
HpackEncoderBenchmark.encode          true           false         true  MEDIUM  avgt   10    2831.874 ±    41.801  ns/op
HpackEncoderBenchmark.encode          true           false         true   LARGE  avgt   10   36054.025 ±   179.504  ns/op
HpackEncoderBenchmark.encode          true           false        false   SMALL  avgt   10     340.337 ±     3.313  ns/op
HpackEncoderBenchmark.encode          true           false        false  MEDIUM  avgt   10    1006.817 ±     8.942  ns/op
HpackEncoderBenchmark.encode          true           false        false   LARGE  avgt   10    8784.168 ±   164.014  ns/op
HpackEncoderBenchmark.encode         false            true         true   SMALL  avgt   10    2561.934 ±    27.056  ns/op
HpackEncoderBenchmark.encode         false            true         true  MEDIUM  avgt   10   22061.105 ±   154.533  ns/op
HpackEncoderBenchmark.encode         false            true         true   LARGE  avgt   10  435744.897 ±  8853.388  ns/op
HpackEncoderBenchmark.encode         false            true        false   SMALL  avgt   10    2737.683 ±    47.142  ns/op
HpackEncoderBenchmark.encode         false            true        false  MEDIUM  avgt   10   22385.146 ±    98.430  ns/op
HpackEncoderBenchmark.encode         false            true        false   LARGE  avgt   10  408159.698 ± 12044.931  ns/op
HpackEncoderBenchmark.encode         false           false         true   SMALL  avgt   10     544.213 ±     3.279  ns/op
HpackEncoderBenchmark.encode         false           false         true  MEDIUM  avgt   10    2908.978 ±    31.026  ns/op
HpackEncoderBenchmark.encode         false           false         true   LARGE  avgt   10   36471.262 ±  1044.010  ns/op
HpackEncoderBenchmark.encode         false           false        false   SMALL  avgt   10     609.305 ±     4.371  ns/op
HpackEncoderBenchmark.encode         false           false        false  MEDIUM  avgt   10    3223.946 ±    23.505  ns/op
HpackEncoderBenchmark.encode         false           false        false   LARGE  avgt   10   39975.152 ±   655.196  ns/op
```

AFTER:
```
Benchmark                     (duplicates)  (limitToAscii)  (sensitive)  (size)  Mode  Cnt     Score     Error  Units
HpackEncoderBenchmark.encode          true            true         true   SMALL  avgt    5   379.473 ± 133.815  ns/op
HpackEncoderBenchmark.encode          true            true         true  MEDIUM  avgt    5  1118.772 ±  89.258  ns/op
HpackEncoderBenchmark.encode          true            true         true   LARGE  avgt    5  5366.828 ±  89.746  ns/op
HpackEncoderBenchmark.encode          true            true        false   SMALL  avgt    5   284.401 ±   2.088  ns/op
HpackEncoderBenchmark.encode          true            true        false  MEDIUM  avgt    5   922.805 ±  10.796  ns/op
HpackEncoderBenchmark.encode          true            true        false   LARGE  avgt    5  8727.831 ± 462.138  ns/op
HpackEncoderBenchmark.encode          true           false         true   SMALL  avgt    5   337.093 ±  22.585  ns/op
HpackEncoderBenchmark.encode          true           false         true  MEDIUM  avgt    5   693.689 ±  16.351  ns/op
HpackEncoderBenchmark.encode          true           false         true   LARGE  avgt    5  5616.786 ±  98.647  ns/op
HpackEncoderBenchmark.encode          true           false        false   SMALL  avgt    5   286.708 ±  13.765  ns/op
HpackEncoderBenchmark.encode          true           false        false  MEDIUM  avgt    5   906.279 ±  32.338  ns/op
HpackEncoderBenchmark.encode          true           false        false   LARGE  avgt    5  8304.736 ± 128.584  ns/op
HpackEncoderBenchmark.encode         false            true         true   SMALL  avgt    5   351.381 ±  15.547  ns/op
HpackEncoderBenchmark.encode         false            true         true  MEDIUM  avgt    5  1188.166 ±   7.023  ns/op
HpackEncoderBenchmark.encode         false            true         true   LARGE  avgt    5  6876.009 ±  48.117  ns/op
HpackEncoderBenchmark.encode         false            true        false   SMALL  avgt    5   434.759 ±   8.619  ns/op
HpackEncoderBenchmark.encode         false            true        false  MEDIUM  avgt    5   954.588 ±  58.514  ns/op
HpackEncoderBenchmark.encode         false            true        false   LARGE  avgt    5  8534.017 ± 552.597  ns/op
HpackEncoderBenchmark.encode         false           false         true   SMALL  avgt    5   223.713 ±   4.823  ns/op
HpackEncoderBenchmark.encode         false           false         true  MEDIUM  avgt    5  1181.538 ±  11.851  ns/op
HpackEncoderBenchmark.encode         false           false         true   LARGE  avgt    5  6670.830 ± 267.927  ns/op
HpackEncoderBenchmark.encode         false           false        false   SMALL  avgt    5   424.609 ±  27.477  ns/op
HpackEncoderBenchmark.encode         false           false        false  MEDIUM  avgt    5  1003.578 ±  53.991  ns/op
HpackEncoderBenchmark.encode         false           false        false   LARGE  avgt    5  8428.932 ± 102.838  ns/op
```
2019-07-01 21:00:20 +02:00
Norman Maurer 131be58f48
Correctly take length of ByteBufInputStream into account for readLine… (#9310)
* Correctly take length of ByteBufInputStream into account for readLine() / readByte()

Motivation:

ByteBufInputStream did not correctly take the length into account when validating bounds for readLine() / readByte(), which could lead to reading more than allowed.

Modifications:

- Correctly take length into account
- Add unit tests
- Fix existing unit test

Result:

Correctly take length of ByteBufInputStream into account.
Related to https://github.com/netty/netty/pull/9306.
2019-07-01 20:55:23 +02:00
Dmitriy Dumanskiy 5ded050f7b #7285 Improved "Discarded inbound message" warning (#9286)
Motivation:

On servers with many pipelines or dynamic pipelines, it is easy for the end user to make a mistake during pipeline configuration. Current message:

`Discarded inbound message PooledUnsafeDirectByteBuf(ridx: 0, widx: 2, cap: 2) that reached at the tail of the pipeline. Please check your pipeline configuration.`

This is not always meaningful and doesn't make it easy to find the misconfigured pipeline quickly.

Modification:

Added an additional log placeholder that identifies the pipeline handlers and channel info. This will allow end users to quickly find the problem pipeline.

Result:

A meaningful warning when a message reaches the end of the pipeline. Fixes #7285
2019-07-01 20:38:58 +02:00
xiaoheng1 f8c1f350db Fix public int read() throws IOException method exceeds the limit of length (#9306)
Motivation:

buffer.isReadable() should not be used to limit the amount of data that can be read, as the amount may be less than what is readable.

Modification:

- Use available(), which takes the length into account
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9305
2019-07-01 15:57:34 +02:00
Cory Benfield 14154074f2 Don't loop over TLS records for SNI (#7479)
Motivation:

The AbstractSniHandler previously was willing to tolerate up to three
non-handshake records before a ClientHello that contained an SNI
extension field. This is, so far as I can tell, completely
unnecessary: no TLS implementation will be sending alerts or change
cipher spec messages before ClientHello.

Given that it was not possible to determine why this loop is in
the code to begin with, it's probably just best to remove it.

Modifications:

Remove the for loop.

Result:

The AbstractSniHandler will more rapidly determine whether it should
pass the records on to the default SSL handler.

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
2019-07-01 11:22:55 +02:00
秦世成 4596f9e139 Fix the issue of incorrectly calculating the number of dump lines when using PrettyDump in ByteBufUtil (#9304)
Motivation:

Fix the issue of incorrectly calculating the number of dump rows when using the prettyHexDump method in ByteBufUtil. The way to find the remainder is either length % 16 or length & 15.

Modification:

Fixed the way the remainder is calculated

Result:

Fixed #9301
2019-07-01 08:35:18 +02:00
Carl Mastrangelo 262ced7ce4 Unconditionally initialize sockaddrs in epoll linuxsocket (#9299)
Motivation:

Compiling with -Werror,-Wuninitialized complains about the sockaddrs being uninitialized.
I believe this is because the init function netty_unix_socket_initSockaddr is in a
separate compilation unit. Since this code isn't on the critical path, it's easy
to just memset the variables rather than suppress the warning.

Modification:
Always clear the sockaddrs, even if they will be initialized later.

Result:
Able to compile with warnings turned on
2019-06-29 12:16:58 +02:00
Norman Maurer 05c2967e4a
Http2FrameCodecBuilder.autoAckSettingsFrame(...) must be public (#9295)
Motivation:

b3dba317d7 added AbstractHttp2ConnectionBuilder.autoAckSettingsFrame(...) as a protected method and made it public for Http2MultiplexCodecBuilder. Unfortunately it missed also making it public in Http2FrameCodecBuilder.

Modifications:

Correctly override autoAckSettingsFrame in Http2FrameCodecBuilder and so make it usable when building Http2FrameCodec.

Result:

Be able to also configure autoAckSettingsFrame when Http2FrameCodec is used.
2019-06-29 09:23:38 +02:00
Norman Maurer 47eb9c3bf4
Ensure sc.close() is executed before FixedChannelPoolTest.testCloseAsync() returns (#9298)
Motivation:

We observed occasional test failures in the CI which happened if sc.close() was not completed before the next test ran. If this happened the bind(...) would fail as the LocalAddress was still in use.

Modifications:

Await the close before returning

Result:

Fixes a race in the testsuite which resulted in FixedChannelPoolTest.testAcquireNewConnection failing if FixedChannelPoolTest.testCloseAsync() ran before it.
2019-06-29 09:21:11 +02:00
jimin ee8206cb26 optimize some code (#9289)
Motivation:

There is some manual copying of elements of Collections which can be replaced by Collections.addAll(...), and also some unnecessary semicolons.

Modifications:

- Simplify branches
- Use Collections.addAll
- Code cleanup

Result:

Code cleanup
2019-06-28 13:48:23 +02:00
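
A minimal illustration of the cleanup above, with hypothetical variable names: Collections.addAll(...) replaces a manual copy loop.

```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class AddAllExample {
    public static void main(String[] args) {
        String[] values = { "a", "b", "c" };

        List<String> manual = new ArrayList<>();
        for (String v : values) {   // manual copy loop
            manual.add(v);
        }

        List<String> viaAddAll = new ArrayList<>();
        Collections.addAll(viaAddAll, values);  // same result, one call

        System.out.println(manual.equals(viaAddAll)); // true
    }
}
```
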
Aleksey Yeschenko 6b6475fb56 Prevent ByteToMessageDecoder from overreading when !isAutoRead (#9252)
Motivation:

ByteToMessageDecoder only looks at the last channelRead() in the batch
of channelRead()-s when determining whether or not it should call
ChannelHandlerContext#read() to consume more data when !isAutoRead. This
will lead to read() calls being issued unnecessarily and unprompted if the very
last channelRead() didn't result in at least one decoded message, even
if there have been messages decoded from other channelRead()-s in the
current batch.

Modifications:

Track decode outcomes for the entire batch of channelRead() calls and
only issue a read in BTMD if the entire batch of channelRead() calls
yielded no complete messages.

Result:

ByteToMessageDecoder will no longer overread when the very last read
yielded no message, but the batch of reads did.
2019-06-28 13:43:25 +02:00
Farid Zakaria efe40ac17d Add a test for OpenSslEngine which decrypts traffic (#8699)
Motivation:
I've introduced netty/netty-tcnative#421, which exposes the OpenSSL master key & client/server
random values with the purpose of allowing someone to log them to debug the traffic via auxiliary tools like Wireshark (see also #8653)

Modification:
Augmented OpenSslEngineTest to include a test which manually decrypts the TLS ciphertext
after exposing the master key + client/server random. This acts as proof that the new tcnative methods work correctly!

Result:

More tests

Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
2019-06-28 13:41:24 +02:00
root 5b58b8e6b5 [maven-release-plugin] prepare for next development iteration 2019-06-28 05:57:21 +00:00
root 35e0843376 [maven-release-plugin] prepare release netty-4.1.37.Final 2019-06-28 05:56:28 +00:00
秦世成 f489404fa1 HAProxyMessageDecoder not correctly handle delimiter in all cases (#9282)
Motivation:

In line-based decoders, lines are split by a delimiter, but the delimiter may be \r\n or \n, so when decoding, if findEndOfLine finds the delimiter of a line, the length of the delimiter may be 1 or 2 instead of DELIMITER_LENGTH, whose value is fixed to 2.
The second problem is that if the data to be decoded is too long, the decoder will discard the too-long data and needs to record the length of the discarded bytes. In the original implementation, the discarded bytes are not accumulated but are overwritten with the currently discarded bytes.

Modifications:

- Dynamically calculate the length of the delimiter.
- In discarding mode, add up the number of characters discarded each time.

Result:

Correctly handle all delimiters and also correctly handle too long frames.
2019-06-27 21:58:46 +02:00
jimin 51843d8e8e MqttConnectPayload.toString() should use Arrays.toString() instead of [].toString() (#9292)
Motivation:

The toString() method should use Arrays.toString() to produce a meaningful String representation for arrays.

Modification:

Use Arrays.toString()

Result:

More useful toString() implementation
2019-06-27 21:55:02 +02:00
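
A minimal illustration of the fix above, with a hypothetical array: [].toString() prints only the array identity, while Arrays.toString(...) prints the contents.

```
import java.util.Arrays;

public class ArraysToStringExample {
    public static void main(String[] args) {
        byte[] payload = { 1, 2, 3 };
        System.out.println(payload.toString());        // e.g. [B@1b6d3586 - not useful
        System.out.println(Arrays.toString(payload));  // [1, 2, 3]
    }
}
```
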
Norman Maurer 7dff856b1f
Don't propagate Http2WindowUpdateFrame to the child channel / propagate Http2ResetFrame as user event when using Http2MultiplexHandler (#9290)
Motivation:

We should not propagate Http2WindowUpdateFrames to the child channels at all, as these are not really useful and should not be flow-controlled via `read()` anyway. On the other hand Http2ResetFrame is very useful, but it should be propagated via a user event so the user is aware of it directly even if the user stops reading.

Modifications:

- Don't propagate Http2WindowUpdateFrames when using Http2MultiplexHandler
- Use user event for Http2ResetFrame when using Http2MultiplexHandler
- Adjust javadoc of Http2MultiplexHandler
- Add unit tests

Result:

Fixes https://github.com/netty/netty/pull/8889 and https://github.com/netty/netty/pull/7635
2019-06-27 21:52:52 +02:00
Norman Maurer df46a349e0
Reduce coupling between Http2FrameCodec and Http2Multiplex* (#9273)
Motivation:

Http2MultiplexCodec and Http2MultiplexHandler had a very strong coupling with Http2FrameCodec which we can reduce easily. The end-goal should be to have no coupling at all.

Modifications:

- Reduce coupling by move some common logic to Http2CodecUtil
- Move logic to check if a stream may have existed before to Http2FrameCodec
- Use ArrayDeque as replacement for custom double-linked-list which makes the code a lot more readable
- Use WindowUpdateFrame to signal consume bytes (just as users do when they use Http2FrameCodec directly)

Result:

Less coupling and cleaner code.
2019-06-27 21:43:31 +02:00
jimin 856f1185e1 All overriding methods must have @Override added (#9285)
Motivation:

Some methods that either override others or are implemented as part of implementing an interface were missing the `@Override` annotation.

Modifications:

Add missing `@Override`s

Result:

Code cleanup
2019-06-27 13:51:26 +02:00
jimin 9621a5b981 remove unused imports (#9287)
Motivation:

Some imports are not used

Modification:

remove unused imports

Result:

Code cleanup
2019-06-26 21:08:31 +02:00
jimin 6bd8f0502d Call to ‘asList’ with only one argument could be replaced with ‘singletonList’ (#9288)
Motivation:

asList should only be used if there are multiple elements.

Modification:

Call to asList with only one argument could be replaced with singletonList

Result:

Cleaner code and a bit of memory savings
2019-06-26 21:06:48 +02:00
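
A minimal illustration of the change above, with hypothetical variable names: for a single element, Collections.singletonList(...) avoids the backing varargs array that Arrays.asList(...) creates.

```
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SingletonListExample {
    public static void main(String[] args) {
        List<String> viaAsList = Arrays.asList("only");                 // allocates a varargs array
        List<String> viaSingleton = Collections.singletonList("only");  // fixed-size, no backing array
        System.out.println(viaAsList.equals(viaSingleton));             // true
    }
}
```
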
Alex Blewitt 52169cba95 Replace accumulation with blackhole.consume (#9275)
Motivation:

SpotJMHBugs reports that accumulating a value as a way of preventing dead-code
elimination may be inadvisable, as discussed in
`JMHSample_34_SafeLooping::measureWrong_2`. Change the test so that it consumes
the response with `Blackhole::consume` instead.

Modifications:

- Replace addition of results with explicit `blackhole.consume()` call

Result:

Tests work as before, but with different benchmark numbers.
2019-06-25 21:47:07 +02:00
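
A minimal sketch of the pattern described above, with a hypothetical benchmark: instead of summing results into a field to keep them "live", each result is handed to the JMH Blackhole so the JIT cannot treat the computation as dead code.

```
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.infra.Blackhole;

public class ConsumeExampleBenchmark {

    @Benchmark
    public void measureWithBlackhole(Blackhole blackhole) {
        for (int i = 0; i < 100; i++) {
            // Result is consumed, not accumulated into a field.
            blackhole.consume(compute(i));
        }
    }

    private int compute(int i) {
        return i * 31;
    }
}
```

As the next entry notes, consuming inside a loop has its own overhead, so it is not always the right choice for every benchmark.
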
Francesco Nigro 672fa0c779 Documented non-usage of BlackHole::consume on ByteBufAccessBenchmark (#9279)
Motivation:

Some JMH benchmarks need additional explanations to motivate
specific code choices.

Modifications:

Introduced a comment to explain why calling BlackHole::consume
in a loop is not always the right choice for some benchmarks.

Result:

The relevant method shows a comment that warns about changing
the code to introduce BlackHole::consume in the loop.
2019-06-25 14:52:21 +02:00
Graeme Rocher 18b7bdff12 Add null check to isSkippable. Fixes #9278 (#9280)
Motivation:

Currently GraalVM substrate returns null for reflective calls if the reflection access is not declared up front.

A change introduced in Netty 4.1.35 results in needing to register every Netty handler for reflection. This complicates matters as it is difficult to know all the possible handlers that need to be registered.

Modification:

This change adds a simple
null check such that Netty does not break on GraalVM substrate without the reflection information registration.

Result:

Fixes #9278
2019-06-25 14:45:30 +02:00
Stephane Landelle 039087ed47 Don't filter out TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (#9274)
Motivation:

TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 is supported since Java 8 (see https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html) and belongs to the recommended configurations in many references, eg SSLabs (https://github.com/ssllabs/research/wiki/SSL-and-TLS-Deployment-Best-Practices) or Google Cloud Platform Restricted Profile.

Modifications:

Add TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 to default ciphers list.

Result:

TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 is enabled by default.
2019-06-24 23:11:24 +02:00
Norman Maurer 265c745d9a
EmptyByteBuf.getCharSequence(0,...) must return empty String (#9272)
Motivation:

At the moment EmptyByteBuf.getCharSequence(0,...) will return null while it must return an empty String ("").

Modifications:

- Let EmptyByteBuf.getCharSequence(0,...) return ""
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9271.
2019-06-24 21:09:19 +02:00
Norman Maurer 097f422198
Cleanup http2 example code to make clear it is fine to just use ctx directly. (#9276)
Motivation:

In our example we used pipeline.context(this) to obtain the context of the handler even though it was already passed in via ctx. This could confuse users and give the impression that the context is not the same.

Modifications:

Just use ctx directly.

Result:

Fix confusion in example code. This was brought up on stackoverflow:

https://stackoverflow.com/questions/56711128/when-is-a-channelhandlercontext-handed-to-a-channelhandler-not-that-channelhandl
2019-06-24 21:08:02 +02:00
Julien Viet 1ad47282c3 Preserve the original filename when encoding a multipart/form in mixed mode. (#9270)
Motivation:

The HttpPostRequestEncoder overwrites the original filename of file uploads sharing the same name encoded in mixed mode when it rewrites the multipart body header of the previous file. The original filename should be preserved instead.

Modifications:

Change the HttpPostRequestEncoder to reuse the correct filename when the encoder switches to mixed mode. The original test is incorrect and has been modified too, in addition it tests with an extra file upload since the current test was not testing the continuation of a mixed mode.

Result:

The HttpPostRequestEncoder will preserve the original filename of the first fileupload when switching to mixed mode
2019-06-24 10:40:17 +02:00
秦世成 712077cdef Fixed the haproxy message mem leak issue (#9250)
Motivation:

HAProxyMessage should be released as it contains a list of TLVs which hold a ByteBuf; otherwise it may cause memory leaks.

Modification:

- Let HAProxyMessage extend AbstractReferenceCounted
- Adjust tests.

Result:

Fixes #9201
2019-06-24 10:38:58 +02:00
Norman Maurer 307efbe49c
Split multiplexing from frame decoding to allow easier customization of frame processing and better seperation of responsibilities (#9239)
Motivation:

In the past we had the following class hierarchy:

Http2ConnectionHandler --- Http2FrameCodec -- Http2MultiplexCodec

This hierarchy makes it impossible to plug in any code that would like to act on Http2Frame and Http2StreamFrame, which can be quite useful for various situations (like metrics, logging, etc). Besides this it also made the implementation very hacky. To allow easier maintenance and also allow more flexible customizations we should split Http2MultiplexCodec and Http2FrameCodec.

Modifications:

- Introduce Http2MultiplexHandler (which is a replacement for Http2MultiplexCodec when used together with Http2FrameCodec)
- Mark Http2MultiplexCodecBuilder and Http2MultiplexCodec as deprecated. People should use Http2FrameCodecBuilder / Http2FrameCodec together with Http2MultiplexHandler in the future
- Adjust / Add tests
- Adjust examples

Result:

More flexible usage possible and less hacky / coupled implementation for http2 multiplexing
2019-06-24 09:17:15 +02:00
ursa 41c1ab2e82 Bugfix #9257: WebSocketProtocolHandler does NOT support autoRead=false (#9258)
Motivation:

I need to control WebSockets inbound flow manually, when autoRead=false

Modification:

Add the missing ctx.read() call to WebSocketProtocolHandler, where the read request was being swallowed.

Result:

Fixes #9257
2019-06-24 09:07:57 +02:00
Norman Maurer 517a93d87d Make EventLoopTaskQueueFactory a top-level interface
Motivation:

c9aaa93d83 added the ability to specify an EventLoopTaskQueueFactory but placed it under MultithreadEventLoopGroup, where it does not really belong.

Modifications:

Make EventLoopTaskQueueFactory a top-level interface

Result:

More logical code layout.
2019-06-22 07:38:03 +02:00
Norman Maurer 2c99fc0f12
Recycle RecyclableArrayDeque as fast as possible in FlowControlHandler (#9263)
Motivation:

FlowControlHandler uses a recyclable ArrayDeque internally but only recycles it when the channel is closed. We should instead recycle it as soon as it is empty.

Modifications:

Recycle the deque as fast as possible

Result:

Less RecyclableArrayDeque instances.
2019-06-22 07:27:04 +02:00
Alex Blewitt 430eeee2f6 Return the result of the list.recycle() call (#9264)
Motivation:

Resolve the issue highlighted by SpotJMHBugs that the creation of the RecyclableArrayList may be elided by the JIT since the result isn't consumed or returned.

Modifications:

Return the result of `list.recycle()` so that the list isn't elided.

Result:

The JMH benchmark shows a change in performance indicating that the prior results of this may be unsound.
2019-06-22 07:22:15 +02:00
Nick Hill 2af769f6dc Subsequence versions of ByteBufUtil#writeUtf8(...) methods (#9224)
Motivation

It would be useful to be able to write UTF-8 encoded subsequence of
CharSequence characters to a ByteBuf without needing to create a
temporary object via CharSequence#subSequence().

Modification

Add overloads of ByteBufUtil writeUtf8, reserveAndWriteUtf8 and
utf8Bytes methods which take explicit subsequence bounds.

Result

More efficient writing of substrings to byte buffers possible
2019-06-21 14:05:35 +02:00
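
A minimal sketch of the overloads described above, assuming a (buf, sequence, start, end) parameter order: a slice of a CharSequence is written without creating a temporary object via subSequence().

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class WriteUtf8SubsequenceExample {
    public static void main(String[] args) {
        String line = "key=value";
        int start = line.indexOf('=') + 1;
        int end = line.length();

        ByteBuf buf = Unpooled.buffer();
        // Instead of: ByteBufUtil.writeUtf8(buf, line.subSequence(start, end));
        ByteBufUtil.writeUtf8(buf, line, start, end);

        System.out.println(buf.toString(CharsetUtil.UTF_8)); // value
        buf.release();
    }
}
```
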
Norman Maurer 9dd1aab482
Fix flaky DnsNameResolverTest.testTruncatedWithTcpFallback (#9262)
Motivation:

testTruncatedWithTcpFallback was flaky as we may end up closing the socket before we could read all data. We should only close the socket after we successfully read all data.

Modifications:

Move socket.close() to finally block

Result:

Fix flaky test and so make the CI more stable again.
2019-06-21 09:28:51 +02:00
Norman Maurer c9aaa93d83
Allow to specify a EventLoopTaskQueueFactory for various EventLoopGroup implementations (#9247)
Motivation:

Sometimes it is desirable to be able to use a different Queue implementation for the EventLoop of a Channel. This is currently not possible without resorting to reflection.

Modifications:

- Add a new constructor to Nio|Epoll|KQueueEventLoopGroup which allows specifying a factory that is used to create the task queue. This way the user can override the default implementation.
- Add test

Result:

Be able to change the Queue that is used for the EventLoop.
2019-06-21 09:05:19 +02:00
Nick Hill 6381d0766a De-duplicate PooledByteBuf implementations (#9120)
Motivation

There's quite a lot of duplicate/equivalent logic across the various
concrete ByteBuf implementations. We could take this even further but
for now I've focused on the PooledByteBuf sub-hierarchy.

Modifications

- Move common logic/methods into existing PooledByteBuf abstract
superclass
- Shorten PooledByteBuf.capacity(int) method implementation

Result

Less code to maintain
2019-06-19 20:50:27 +02:00
Kevin Oliver c32c9b4c94 codec-http2: Lazily translate cookies for HTTP/1 (#9251)
Motivation:

For HTTP/2 messages with multiple cookies HttpConversionUtil.addHttp2ToHttpHeaders spends a good portion of time creating throwaway StringBuilders.

Modification:

Handle cookies lazily by using a ThreadLocal StringBuilder and then converting it to the H1 header at the end.

Result:

Less allocations.
2019-06-19 11:03:49 +02:00
Norman Maurer 01cfd78d6d
Try to mark child channel writable again once the parent channel becomes writable (#9254)
Motivation:

f945a071db decoupled the writability state from the flow controller but could lead to a lot of writability update events being propagated to the child channels. This change ensures we only take the parent channel becoming writable again into account before we try to set the child channels to writable.

Modifications:

Only listen for channel writability changes when the parent channel becomes writable again.

Result:

Less writability updates.
2019-06-18 20:30:31 +02:00
ursa 7fc718db3c WebSocket is closed without an error on protocol violations (#9116)
Motivation:

Incorrect WebSockets closure affects our production system.
The enforced 'close socket on any protocol violation' prevents our custom termination sequence from executing.
The huge number of parameters is a nightmare both in usage and in support (decoder configuration).
Modification:

Fix violations handling - send proper response codes.
Fix for messages leak.
Introduce decoder's option to disable default behavior (send close frame) on protocol violations.
Encapsulate WebSocket response codes - WebSocketCloseStatus.
Encapsulate decoder's configuration into a separate class - WebSocketDecoderConfig.
Result:

Fixes #8295.
2019-06-18 10:05:58 +02:00
Norman Maurer f945a071db
Writability state of http2 child channels should be decoupled from the flow-controller (#9235)
Motivation:

We should decouple the writability state of the http2 child channels from the flow-controller and just tie it to each channel's own pending-bytes counter, which is decremented by the parent Channel once the bytes have been written.

Modifications:

- Decouple writability state of child channels from the flow-controller
- Update tests

Result:

Less coupling and more correct behavior. Fixes https://github.com/netty/netty/issues/8148.
2019-06-18 09:37:59 +02:00
Frédéric Brégier b1fb40e42d Change Scheduled to FixedRate in Traffic Counter (#9245)
Motivation:

Traffic shaping needs more accurate execution than a simple schedule, hence
the use of a fixed rate instead.
Moreover the current implementation tends to create as many threads as there
are channels using a ChannelTrafficShapingHandler, which is unnecessary.

Modifications:

Change executor.schedule to executor.scheduleAtFixedRate in the start
and remove the reschedule call from the monitor thread's run, since it
will be restarted by the fixed-rate executor.
Also fix a minor bug where restart was only doing start() without stop()
before.

Result:

The number of cached threads is more stable and the precision of traffic
shaping is enhanced.
2019-06-18 09:34:48 +02:00
Aleksey Yeschenko 93414db1f3 Fix LZ4 encoder/decoder performance with (default) xxHash32 (#9249)
Motivation:

Lz4FrameEncoder and Lz4FrameDecoder in their default configuration use
an extremely inefficient way to checksum direct byte buffers. In
particular, for every byte checksummed, a single-element byte array is
being allocated and a JNI call is made, which in some internal testing
makes a 25x difference in total throughput and allocates *a lot* of
garbage.

Modifications:

Lz4XXHash32, an implementation of ByteBufChecksum specifically for use
by Lz4FrameEncoder and Lz4FrameDecoder, is introduced. It utilises
xxHash32 block API which provides a hash() method that accepts a
ByteBuffer as an argument. Lz4FrameEncoder and Lz4FrameDecoder are
modified to use this implementation by default.

Result:

Lz4FrameEncoder and Lz4FrameDecoder perform well again when operating
on direct byte buffers with default checksum configuration; a public
implementation is provided for those who need to override the seed.
2019-06-18 09:29:25 +02:00
Aleksey Yeschenko a2583d0d3c Fix ReflectiveByteBufChecksum with direct buffers (#9244)
Motivation:

ReflectiveByteBufChecksum#update(buf, off, len) ignores provided offset
and length arguments when operating on direct buffers, leading to wrong
byte sequences being checksummed and ultimately incorrect checksum
values (unless checksumming the entire buffer).

Modifications:

Use the provided offset and length arguments to get the correct nio
buffer to checksum; add test coverage exercising the four meaningfully
different offset and length combinations.

Result:

Offset and length are respected and a correct checksum gets calculated;
simple unit test should prevent regressions in the future.
2019-06-17 16:32:12 +02:00
Scott Mitchell 96feca1d23 SslHandler to fail handshake and pending writes if non-application write fails (#9240)
Motivation:
SslHandler must generate control data as part of the TLS protocol, for example
to do handshakes. SslHandler doesn't capture the status of the future
corresponding to the writes when writing this control (aka non-application
data). If there is another handler before the SslHandler that wants to fail
these writes the SslHandler will not detect the failure and we must wait until
the handshake timeout to detect a failure.

Modifications:
- SslHandler should detect if non application writes fail, tear down the
channel, and clean up any pending state.

Result:
SslHandler detects non application write failures and cleans up immediately.
2019-06-16 07:38:33 +02:00
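The change above boils down to observing the futures of internal (non-application) writes and failing fast. A rough sketch of that pattern, with hypothetical names rather than the actual SslHandler code:

```
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.util.concurrent.Promise;

final class NonApplicationWriteMonitor {
    private NonApplicationWriteMonitor() { }

    /**
     * Writes handshake/control data and fails the handshake promise immediately
     * if the write fails, instead of waiting for the handshake timeout.
     */
    static ChannelFuture writeAndWatch(ChannelHandlerContext ctx, Object controlData,
                                       Promise<Void> handshakePromise) {
        ChannelFuture future = ctx.writeAndFlush(controlData);
        future.addListener((ChannelFutureListener) f -> {
            if (!f.isSuccess()) {
                handshakePromise.tryFailure(f.cause()); // surface the failure right away
                f.channel().close();                    // tear down the channel and pending state
            }
        });
        return future;
    }
}
```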
Aleksey Yeschenko a29532df43 Fix ByteBufChecksum optimisation for CRC32 and Adler32 (#9242)
Motivation:

Because of a simple bug in ByteBufChecksum#updateByteBuffer(Checksum),
ReflectiveByteBufChecksum is never used for CRC32 and Adler32, resulting
in direct ByteBuffers being checksummed byte by byte, which is
undesirable.

Modification:

Fix ByteBufChecksum#updateByteBuffer(Checksum) method to pass the
correct argument to Method#invoke(Checksum, ByteBuffer).

Result:

ReflectiveByteBufChecksum will now be used for Adler32 and CRC32 on
Java8+ and direct ByteBuffers will no longer be checksummed on slow
byte-by-byte basis.
2019-06-16 07:32:51 +02:00
Divij Vaidya fa1dedcc0f Make sync close for FixedChannelPool truly synchronous (#9226)
Motivation:

In the current implementation, the synchronous close() method of FixedChannelPool returns
after scheduling the channels to close asynchronously via a single-threaded executor. Closing a channel
requires the event loop group; however, there might be a scenario where the application has closed
the event loop group after the sync close() completes. In this scenario an exception is thrown
(the event loop rejected the execution) when the single-threaded executor tries to close the channel.

Modifications:

Complete the close function only after all the channels have been closed, and introduce
a closeAsync() method for cases where the current/existing behaviour is desired.

Result:

The close function now completes only once the channels have been closed
2019-06-14 12:01:14 +02:00
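A usage sketch of the new close semantics, assuming a FixedChannelPool and EventLoopGroup created elsewhere; the helper class is illustrative only:

```
import io.netty.channel.EventLoopGroup;
import io.netty.channel.pool.FixedChannelPool;

final class PoolShutdownExample {
    private PoolShutdownExample() { }

    static void shutdown(FixedChannelPool pool, EventLoopGroup group) throws InterruptedException {
        // close() now blocks until all pooled channels are actually closed, so shutting
        // down the event loop group afterwards no longer races with pending closes.
        pool.close();

        // If blocking is not acceptable, the new asynchronous variant can be used instead:
        // pool.closeAsync().addListener(f -> group.shutdownGracefully());

        group.shutdownGracefully().sync();
    }
}
```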
Norman Maurer dc2649e95d
Allow to set parent Channel when constructing EmbeddedChannel (#9230)
Motivation:

Sometimes it is beneficial to be able to set a parent Channel in EmbeddedChannel if the handler that should be tested depends on the parent.

Modifications:

- Add another constructor which allows to specify a parent
- Add unit tests

Result:

Fixes https://github.com/netty/netty/issues/9228.
2019-06-08 09:11:31 -07:00
Stephane Landelle 3c36ce6b5c Introduce WebSocketClientHandshaker::absoluteUpgradeUrl, close #9205 (#9206)
Motivation:

When connecting through an HTTP proxy over clear HTTP, user agents must send requests with an absolute URL. This holds true for WebSocket Upgrade requests.

WebSocketClientHandshaker and subclasses currently always send requests with a relative URL, which causes proxies to crash as the request is malformed.

Modification:

Introduce a new parameter `absoluteUpgradeUrl` and expose it in constructors and WebSocketClientHandshakerFactory.

Result:

It's now possible to configure WebSocketClientHandshaker so it works properly with HTTP proxies over clear HTTP.
2019-06-07 16:01:10 -07:00
yipulash ac95ff8b63 delete Other "Content-" MIME Header Fields exception (#9122)
Motivation:

RFC7578 4.8. Other "Content-" Header Fields

The multipart/form-data media type does not support any MIME header
fields in parts other than Content-Type, Content-Disposition, and (in
limited circumstances) Content-Transfer-Encoding. Other header
fields MUST NOT be included and MUST be ignored.

Modification:

Ignore other Content types.

Result: 

Other "Content-" header fields are now ignored and no exception is thrown
2019-06-07 13:51:25 -07:00
Norman Maurer 165229658b
Add support for loopbackmode and accessing the configured interface when using epoll native transport with multicast (#9218)
Motivation:

We did not have support for enabling / disabling loopback mode in our native epoll transport and also missed the implementation to access the configured interface.

Modifications:

Add implementation and adjust test to cover it

Result:

More complete multicast support with native epoll transport
2019-06-07 13:44:06 -07:00
Carl Mastrangelo 67ad79d080 Handle missing methods on ChannelHandlerMask (#9221)
Motivation:

When Netty is run through ProGuard, seemingly unused methods are removed. This breaks reflection, making the handler-skipping logic throw a reflective error.

Modification:

If a method is seemingly absent, just disable the optimization.

Result:

Dealing with ProGuard sucks infinitesimally less.
2019-06-07 13:39:47 -07:00
Scott Mitchell 643d521d5e
HTTP/2 avoid closing connection when writing GOAWAY (#9227)
Motivation:
b4e3c12b8e introduced code to avoid coupling
close() to graceful close. It also added some code which attempted to infer when
a graceful close was being done in writing of a GOAWAY to preserve the
"connection is closed when all streams are closed behavior" for the child
channel API. However the implementation was too overzealous and may preemptively
close the connection if there are not currently any open streams (and close if
there are any frames which create streams in flight).

Modifications:
- Decouple writing a GOAWAY from trying to infer if a graceful close is being
  done and closing the connection. Even if we could enhance this logic (e.g.
wait to close until the second GOAWAY with no error) it is possible the user
doesn't want the connection to be closed yet. We can add a means for the codec
to orchestrate the graceful close in the future (e.g. write some special "close
the connection when all streams are closed") but for now we can just let the
application handle this.

Result:
Fixes https://github.com/netty/netty/issues/9207
2019-06-06 17:44:12 -07:00
Carl Mastrangelo 9abeaf16fd Properly debounce wakeups (#9191)
Motivation:
The wakeup logic in EpollEventLoop is overly complex

Modification:
* Simplify the race to wake up the loop
* Don't let the event loop wake up itself (it's already awake!)
* Make the event loop check if there are any more tasks after preparing to
sleep. There is a small window where non-eventloop writers can issue
eventfd writes here, but that is okay.

Result:
Cleaner wakeup logic.

Benchmarks:

```
BEFORE
Benchmark                                   Mode  Cnt       Score      Error  Units
EpollSocketChannelBenchmark.executeMulti   thrpt   20  408381.411 ± 2857.498  ops/s
EpollSocketChannelBenchmark.executeSingle  thrpt   20  157022.360 ± 1240.573  ops/s
EpollSocketChannelBenchmark.pingPong       thrpt   20   60571.704 ±  331.125  ops/s

AFTER
Benchmark                                   Mode  Cnt       Score      Error  Units
EpollSocketChannelBenchmark.executeMulti   thrpt   20  440546.953 ± 1652.823  ops/s
EpollSocketChannelBenchmark.executeSingle  thrpt   20  168114.751 ± 1176.609  ops/s
EpollSocketChannelBenchmark.pingPong       thrpt   20   61231.878 ±  520.108  ops/s
```
2019-06-04 05:17:23 -07:00
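A generic sketch of the wake-up debouncing idea above (illustrative only, not the actual EpollEventLoop code):

```
import java.util.concurrent.atomic.AtomicBoolean;

final class WakeupDebouncer {
    private final AtomicBoolean pendingWakeup = new AtomicBoolean();

    /** Called by non-event-loop threads when they add a task. */
    void wakeup(boolean inEventLoop) {
        // The event loop is already awake when it adds tasks to itself,
        // and at most one wake-up signal is ever outstanding.
        if (!inEventLoop && pendingWakeup.compareAndSet(false, true)) {
            signalEventFd();
        }
    }

    /** Called by the event loop right after it wakes up. */
    void afterWakeup() {
        pendingWakeup.set(false);
    }

    private void signalEventFd() {
        // Platform specific (e.g. a single eventfd write); left empty in this sketch.
    }
}
```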
EliyahuStern 6f602cbd14 Resolve the pid field in PeerCredentials of KQueueDomainSocketChannels. (#9219)
Motivation:

This resolves a TODO from the initial transport-native-kqueue implementation, supplying the user with the pid of the local peer client/server process.

Modification:

Inside netty_kqueue_bsdsocket_getPeerCredentials, call getsockopt with LOCAL_PEERPID and pass the result to the PeerCredentials constructor.
Add a test case in KQueueSocketTest.

Result:

PeerCredentials now have pid field set. Fixes https://github.com/netty/netty/issues/9213
2019-06-04 05:15:42 -07:00
Jon Chambers f194aedbf0 Close delegate resolver from RoundRobinInetAddressResolver (#9214)
Motivation:

RoundRobinDnsAddressResolverGroup ultimately opens UDP
ports for DNS resolution. Callers likely expect that
RoundRobinDnsAddressResolverGroup#close() will close those
ports, but that is not currently true (see #9212).

Modifications:

Overrode RoundRobinInetAddressResolver#close() to close
the delegate name resolver, which in turn closes any UDP
ports used for name resolution.

Result:

RoundRobinDnsAddressResolverGroup#close() closes UDP ports
as expected. This fixes #9212.
2019-06-04 05:13:44 -07:00
Nick Hill 272f68f48c De-duplicate UnpooledDirectByteBuf/UnpooledUnsafeDirectByteBuf (#9085)
Motivation

While digging around looking at something else I noticed that these
share a lot of logic and it would be nice to reduce that duplication.

Modifications

Have UnpooledUnsafeDirectByteBuf extend UnpooledDirectByteBuf and make
adjustments to ensure existing behaviour remains unchanged.

The most significant addition needed to UnpooledUnsafeDirectByteBuf was
re-overriding the getPrimitive/setPrimitive methods to revert back to
the AbstractByteBuf versions which include bounds checks
(UnpooledDirectByteBuf excludes these as an optimization, relying on
those done by underlying ByteBuffer).

Result

~200 fewer lines, less duplicate logic.
2019-06-03 13:04:10 +02:00
Norman Maurer 7817827324
Allow null sender when using DatagramPacketEncoder (#9204)
Motivation:

It is valid to use null as sender so we should support it when DatagramPacketEncoder checks if it supports the message.

Modifications:

- Add null check
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9199.
2019-06-03 08:44:35 +02:00
Norman Maurer b91889c3db
ByteToMessageDecoder.handlerRemoved(...) should only call fireChannelReadComplete() if fireChannelRead(...) was called before (#9211)
Motivation:

At the moment ByteToMessageDecoder always calls fireChannelReadComplete() when the handler is removed from the pipeline and the cumulation buffer is not null. We should only call it when we also call fireChannelRead(...), which only happens if the cumulation buffer is not null and readable.

Modifications:

Only call fireChannelReadComplete() if fireChannelRead(...) is called before during removal of the handler.

Result:

More correct semantics
2019-06-03 08:43:19 +02:00
Idel Pivnitskiy ec69da9afb Make UnpooledUnsafeHeapByteBuf class public (#9184)
Motivation:

1. Users will be able to use an optimized version of
`UnpooledHeapByteBuf` and override behavior of methods if required.
2. Consistency with `UnpooledDirectByteBuf`, `UnpooledHeapByteBuf`, and
`UnpooledUnsafeDirectByteBuf`.

Modifications:

- Add `public` access modifier to `UnpooledUnsafeHeapByteBuf` class and
ctor;

Result:

Public access for optimized version of `UnpooledHeapByteBuf`.
2019-05-31 07:04:03 +02:00
Norman Maurer f6cf681f90
Don't read from timerfd and eventfd on each EventLoop tick (#9192)
Motivation:

We do not need to issue a read on timerfd and eventfd when the EventLoop wakes up if we register these as Edge-Triggered. This removes the overhead of 2 syscalls and so helps to reduce latency.

Modifications:

- Ensure we register the timerfd and eventfd with EPOLLET flag
- If eventfd_write fails with EAGAIN, call eventfd_read and try eventfd_write again as we only use it as wake-up mechanism.

Result:

Less syscalls and so reducing overhead.

Co-authored-by: Carl Mastrangelo <carl@carlmastrangelo.com>
2019-05-31 06:59:39 +02:00
SplotyCode ede7251ecb Fixed toString() exception in MqttSubscribePayload and MqttUnsubscribePayload (#9202)
Motivation:
The toString() methods of MqttSubscribePayload and MqttUnsubscribePayload are causing exceptions when no topics are set.

Modification:
The toString() methods will not throw exceptions anymore.

Result:
Fixes #9197
2019-05-31 06:46:50 +02:00
Nick Hill e1a881fa2b Simplify SingleThreadEventExecutor.awaitTermination() implementation (#9081)
Motivation

A Semaphore is currently dedicated to this purpose but a simple
CountDownLatch will do.

Modification

Remove private threadLock Semaphore from SingleThreadEventExecutor and just use a CountDownLatch.

Also eliminate use of PlatformDependent.throwException() in startThread
method, and combine some nested if clauses.

Result

Cleaner EventLoop termination notification.
2019-05-27 16:05:40 +02:00
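A minimal sketch of the CountDownLatch-based termination signal described above (not the actual SingleThreadEventExecutor code):

```
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

final class TerminationSignal {
    private final CountDownLatch terminated = new CountDownLatch(1);

    /** Called once by the executor thread when it finishes its last task. */
    void signalTerminated() {
        terminated.countDown();
    }

    /** Called by arbitrary threads to wait for termination. */
    boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException {
        return terminated.await(timeout, unit);
    }
}
```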
Norman Maurer 8b04c5ffe7
Set the HOST header in Http2ClientInitializer when trying to start an upgrade request (#9177)
Motivation:

The io.netty.example.http2.helloworld.client.Http2Client example should work in the h2c (HTTP/2 cleartext - non-TLS) mode, which is the default for this example unless you set a -Dssl VM param. As we do not set the HOST header, some servers reject the upgrade request.

Modifications:

Set the HOST header

Result:

Fixes https://github.com/netty/netty/issues/9115.
2019-05-27 16:02:38 +02:00
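A sketch of the fix above: make sure the h2c upgrade request carries a Host header. The helper below is illustrative, not the example's actual code:

```
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpVersion;

final class UpgradeRequests {
    private UpgradeRequests() { }

    static FullHttpRequest newUpgradeRequest(String host, int port) {
        FullHttpRequest request =
                new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
        // Without a HOST header some servers reject the h2c upgrade request.
        request.headers().set(HttpHeaderNames.HOST, host + ':' + port);
        return request;
    }
}
```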
Nick Hill 385dadcfbc Fix redundant or missing checks and other inconsistencies in ByteBuf impls (#9119)
Motivation

There are a few minor inconsistencies / redundant operations in the
ByteBuf implementations which would be good to fix.

Modifications

- Unnecessary ByteBuffer.duplicate() performed in
CompositeByteBuf.nioBuffer(int,int)
- Add missing checkIndex(...) check to
ReadOnlyByteBufferBuf.nioBuffer(int,int)
- Remove duplicate bounds check in
ReadOnlyByteBufferBuf.getBytes(int,byte[],int,int)
- Omit redundant bounds check in
UnpooledHeapByteBuf.getBytes(int,ByteBuffer)

Result

More consistency and slightly less overhead
2019-05-27 15:32:08 +02:00
Norman Maurer e17ce934da
Correctly detect InternetProtocolFamily when EpollDatagramChannel is created with existing FileDescriptor (#9185)
Motivation:

When EpollDatagramChannel is created with an existing FileDescriptor we should detect the correct InternetProtocolFamily.

Modifications:

Obtain the InternetProtocolFamily from the given FD

Result:

Use correct InternetProtocolFamily when EpollDatagramChannel is created via existing FileDescriptor
2019-05-26 20:22:55 +02:00
Steve Buzzard 70731bfa7e Added UDP multicast (with caveats: getInterface, getNetworkInterface, block or loopback-mode-disabled operations).
Motivation:

Provide epoll/native multicast to support high load multicast users (we are using it for a high load telecomm app at my day job).

Modification:

Added support for source-specific and any-source multicast for the epoll transport. Some caveats: no support for disabling loopback mode, retrieval of the interface, and the block operation, all of which tend to be less frequently used.

Result:

Provides epoll transport multicast for common use cases.

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
2019-05-25 08:00:16 +02:00
Norman Maurer 137a3e7137
Do not use static exceptions for websocket handshake timeout (#9174)
Motivation:

f17bfd0f64 removed the usage of static exception instances to reduce the risk of OOME due to addSuppressed calls. We should do the same for exceptions used to signal handshake timeouts.

Modifications:

Do not use static instances

Result:

No risk of OOME due to addSuppressed calls
2019-05-23 08:24:03 +02:00
noSim b11afd28f4 Updated jboss-marshalling dependency to current license (#9172)
Motivation:

The mentioned license for the jboss-marshalling dependency is outdated. The license has moved from LGPL v2.1 to Apache 2.0.
The version used by Netty (1.4.11Final) is on Apache 2.0 see https://github.com/jboss-remoting/jboss-marshalling/blob/1.4.11.Final/LICENSE.txt

Modification:

Updated NOTICE file with correct license for jboss-marshalling.

Result:

NOTICE file shows correct license.
2019-05-23 07:21:11 +02:00
Nick Hill 8ce3d52c0e OpenSsl.USE_KEYMANAGER_FACTORY incorrectly set to false with BoringSSL (#9175)
Motivation

SSL unit tests started failing for me (RHEL 7.6) after #9162. It looks
like the intention was to disable use of the
io.netty.handler.ssl.openssl.useKeyManagerFactory property when using
BoringSSL, but it now gets set to false in that case rather than the
prior/non-BoringSSL default of true.

Modification

Set useKeyManagerFactory to true rather than false in BoringSSL case
during static init of OpenSSl class.

Result

Tests pass again.
2019-05-23 07:09:55 +02:00
Nick Hill 128403b492 Introduce ByteBuf.maxFastWritableBytes() method (#9086)
Motivation

ByteBuf capacity is automatically increased as needed up to maxCapacity
when writing beyond the buffer's current capacity. However there's no
way to tell in general whether such an increase will result in a
relatively costly internal buffer re-allocation.

For unpooled buffers it always does, in pooled cases it depends on the
size of the associated chunk of allocated memory, which I don't think is
currently exposed in any way.

It would sometimes be useful to know where this limit is when making
external decisions about whether to reuse or preemptively reallocate.

It would also be advantageous to take this limit into account when
auto-increasing the capacity during writes, to defer such reallocation
until really necessary.

Modifications

Introduce new AbstractByteBuf.maxFastWritableBytes() method which will
return a value >= writableBytes() and <= maxWritableBytes().

Make use of the new method in the sizing decision made by the
AbstractByteBuf.ensureWritable(...) methods.

Result

Less reallocation/copying.
2019-05-22 20:11:24 +02:00
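A small usage sketch for the new method described above; the helper class is hypothetical:

```
import io.netty.buffer.ByteBuf;

final class FastWriteCheck {
    private FastWriteCheck() { }

    /**
     * True if the upcoming write fits without forcing the buffer to reallocate and
     * copy its backing memory. Invariant per the change above:
     * writableBytes() <= maxFastWritableBytes() <= maxWritableBytes().
     */
    static boolean fitsWithoutReallocation(ByteBuf buf, int bytesToWrite) {
        return bytesToWrite <= buf.maxFastWritableBytes();
    }
}
```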
Vojin Jovanovic 3eff1dbc1b Remove deprecated GraalVM native-image flags (#9118)
Motivation:

The first final version of GraalVM was released which deprecated some flags. We should use the new ones.

Modifications:

Removes the use of deprecated GraalVM native-image flags
Adds a flag to initialize netty at build time.

Result:

Do not use deprecated flags
2019-05-22 19:20:54 +02:00
Norman Maurer 224d5fafaf
Correctly detect that KeyManagerFactory is not supported when using OpenSSL 1.1.0+ (#9170)
Motivation:

How we tried to detect if KeyManagerFactory is supported was not good enough for OpenSSL 1.1.0+ as it partly provided the API but not all of what is required.

This then led to failures like:

[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.102 s <<< FAILURE! - in io.netty.channel.epoll.EpollDomainSocketStartTlsTest
[ERROR] initializationError(io.netty.channel.epoll.EpollDomainSocketStartTlsTest)  Time elapsed: 0.016 s  <<< ERROR!
javax.net.ssl.SSLException: failed to set certificate and key
	at io.netty.handler.ssl.ReferenceCountedOpenSslServerContext.newSessionContext(ReferenceCountedOpenSslServerContext.java:130)
	at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:353)
	at io.netty.handler.ssl.OpenSslServerContext.<init>(OpenSslServerContext.java:334)
	at io.netty.handler.ssl.SslContext.newServerContextInternal(SslContext.java:468)
	at io.netty.handler.ssl.SslContextBuilder.build(SslContextBuilder.java:457)
	at io.netty.testsuite.transport.socket.SocketStartTlsTest.data(SocketStartTlsTest.java:93)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.runners.Parameterized.allParameters(Parameterized.java:280)
	at org.junit.runners.Parameterized.<init>(Parameterized.java:248)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.junit.internal.builders.AnnotatedBuilder.buildRunner(AnnotatedBuilder.java:104)
	at org.junit.internal.builders.AnnotatedBuilder.runnerForClass(AnnotatedBuilder.java:86)
	at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
	at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:26)
	at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:59)
	at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:33)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:362)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Caused by: java.lang.Exception: Requires OpenSSL 1.0.2+
	at io.netty.internal.tcnative.SSLContext.setCertificateCallback(Native Method)
	at io.netty.handler.ssl.ReferenceCountedOpenSslServerContext.newSessionContext(ReferenceCountedOpenSslServerContext.java:126)
	... 32 more

Modifications:

Also try to set the certificate callback, and only mark KeyManagerFactory support as enabled if this works as well.

Result:

Also correctly work when OpenSSL 1.1.0 is used.
2019-05-22 19:07:19 +02:00
秦世成 5ffac03f1e Support handshake timeout in websocket handlers (#8856)
Motivation:

Support handshake timeout option in websocket handlers. It makes sense to limit the time we need to move from `HANDSHAKE_ISSUED` to `HANDSHAKE_COMPLETE` states when upgrading to WebSockets

Modification:

- Add `handshakeTimeoutMillis` option in `WebSocketClientProtocolHandshakeHandler`  and `WebSocketServerProtocolHandshakeHandler`.
- Schedule a timeout task, the task will trigger user event `HANDSHAKE_TIMEOUT` if the handshake timed out.

Result:

Fixes issue https://github.com/netty/netty/issues/8841
2019-05-22 12:37:28 +02:00
Nick Hill 2ca526fac6 Ensure "full" ownership of msgs passed to EmbeddedChannel.writeInbound() (#9058)
Motivation

Pipeline handlers are free to "take control" of input buffers if they have singular refcount - in particular to mutate their raw data if non-readonly via discarding of read bytes, etc.

However there are various places (primarily unit tests) where a wrapped byte-array buffer is passed in and the wrapped array is assumed not to change (used after the wrapped buffer is passed to EmbeddedChannel.writeInbound()). This invalid assumption could result in unexpected errors, such as those exposed by #8931.

Modifications

Anywhere that the data passed to writeInbound() might be used again, ensure that either:
- A copy is used rather than wrapping a shared byte array, or
- The buffer is otherwise protected from modification by making it read-only

For the tests, copying is preferred since it still allows the "mutating" optimizations to be exercised.

Results

Avoid possible errors when pipeline assumes it has full control of input buffer.
2019-05-22 12:08:49 +02:00
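A test-side sketch of the guideline above: hand the pipeline its own copy of the bytes so handlers remain free to mutate the buffer they receive. The helper class is illustrative only:

```
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;

final class WriteInboundSafely {
    private WriteInboundSafely() { }

    static void write(EmbeddedChannel channel, byte[] sharedBytes) {
        // copiedBuffer(...) copies the array; wrappedBuffer(sharedBytes) would alias it,
        // allowing handlers to mutate data the test still relies on.
        channel.writeInbound(Unpooled.copiedBuffer(sharedBytes));
    }
}
```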
Carl Mastrangelo dea4e33c52 Don't double record stacktrace in Annotated*Exception (#9117)
Motivation:
When initializing the AnnotatedSocketException in AbstractChannel, both
the cause and the stack trace are set, leaving a trailing "Caused By"
that is compressed when printing the trace.

Modification:
Don't include the stack trace in the exception, but leave it in the cause.

Result:
Clearer stack trace
2019-05-22 12:06:30 +02:00
Nick Hill 507e0a05b5 Fix possible unsafe sharing of internal NIO buffer in CompositeByteBuf (#9169)
Motivation

A small thread-safety bug was introduced during the internal
optimizations of ComponentByteBuf made a while back in #8437. When there
is a single component which was added as a slice,
internalNioBuffer(int,int) will currently return the unwrapped slice's
un-duplicated internal NIO buffer. This is not safe since it could be
modified concurrently with other usage of that parent buffer.

Modifications

Delegate internalNioBuffer to nioBuffer in this case, which returns a
duplicate. This matches what's done in derived buffers in general
(for the same reason). Add unit test.

Result

Fixed possible thread-safety bug
2019-05-22 11:07:06 +02:00
Fabien Renaud 52c5389190 codec-memcache: copy metadata in binary full request response (#9160)
Motivations
-----------
Calling `copy()`, `duplicate()` or `replace()` on `FullBinaryMemcacheResponse`
or `FullBinaryMemcacheRequest` instances should copy status, opCode, etc.
that are defined in `AbstractBinaryMemcacheMessage`.

Modifications
-------------
 - Modified duplicate, copy and replace methods in
DefaultFullBinaryMemcacheRequest and DefaultFullBinaryMemcacheResponse
to always copy metadata from parent classes.
 - Unit tests verifying duplicate, copy and replace methods for
DefaultFullBinaryMemcacheRequest and DefaultFullBinaryMemcacheResponse
copy buffers and metadata as expected.

Result
------
Calling copy(), duplicate() or replace() methods on
DefaultFullBinaryMemcacheRequest or DefaultFullBinaryMemcacheResponse
produces valid copies with all expected metadata.

Fixes #9159
2019-05-22 11:05:52 +02:00
Julien Viet e348bd9217 KQueueEventLoop | EpollEventLoop may incorrectly update registration when FD is reused.
Motivation:

The current KQueueEventLoop implementation does not process concurrent domain socket channel registrations/unregistrations in the order they actually
happen, since unregistrations are delayed by an event loop task scheduling. When a domain socket is closed, its file descriptor might be reused
quickly and therefore trigger a new channel registration using the same descriptor.

Consequently the KQueueEventLoop#add(AbstractKQueueChannel) method will overwrite the current inactive channels having the same descriptor
and the delayed KQueueEventLoop#remove(AbstractKQueueChannel) will remove the active channel that replaced the inactive one.

As active channels are registered, events for this file descriptor won't be processed anymore and the channels will never be closed.

The same problem can also happen in EpollEventLoop. Besides this, we also may never remove the AbstractEpollChannel from the internal map
when it is unregistered, which will prevent it from being GC'ed

Modifications:

- Change logic of native KQueue and Epoll implementations to ensure we correctly handle the case of FD reuse
- Only try to update kevent / epoll if the Channel is still open (as otherwise it will be handled by kqueue / epoll itself)
- Correctly remove AbstractEpollChannel from internal map in all cases
- Make implementation of closeAll() consistent for Epoll and KQueueEventLoop

Result:

KQueue and Epoll native transports correctly handle FD reuse

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
2019-05-22 09:23:09 +02:00
Norman Maurer af98b62150
Log deprecation info message when using 'io.netty.handler.ssl.openssl.useKeyManagerFactory' and ignore it when using BoringSSL (#9162)
Motivation:

When we added support for KeyManagerFactory we also allowed disabling it to make the change less risky. This was done years ago and so there is really no need to use the property anyway.
Unfortunately, due to a change in netty-tcnative it is not even supported anymore when using BoringSSL.

Modifications:

- Log an info message to tell users that 'io.netty.handler.ssl.openssl.useKeyManagerFactory' is deprecated when it is used
- Ignore 'io.netty.handler.ssl.openssl.useKeyManagerFactory' when BoringSSL is used.

Result:

Fixes https://github.com/netty/netty/issues/9147.
2019-05-22 08:40:19 +02:00
Tim Brooks 2dc686ded1 Prefer direct io buffers if direct buffers pooled (#9167)
Motivation

Direct buffers are normally preferred when interfacing with raw
sockets. Currently netty will only return direct io buffers (for reading
from a channel) when a platform has unsafe. However, this is
inconsistent with the write-side (filterOutboundMessage) where a direct
byte buffer will be returned if pooling is enabled. This means that
environments without unsafe (and no manual netty configurations) end up
with many pooled heap byte buffers for reading, many pooled direct byte
buffers for writing, and jdk pooled byte buffers (for reading).

Modifications

This commit modifies the AbstractByteBufAllocator to return a direct
byte buffer for io handling when the platform has unsafe or direct byte
buffers are pooled.

Result:

Use direct buffers when direct buffers are pooled for IO.
2019-05-22 07:32:41 +02:00
Norman Maurer afdc77f9d3
Update to latest JDK13 EA release (#9166)
Motivation:

We should use the latest EA release when trying to compile with JDK13.

Modifications:

Update to latest release

Result:

Test with latest release on the CI
2019-05-21 20:10:09 +02:00
秦世成 18f27db194 Format code to align unaligned code. (#9062)
Motivation:
Format code to align unaligned code.

Modification:
Reformat the code

Result:

Cleaner code
2019-05-20 12:07:02 +02:00
Norman Maurer 9c6365ee95
Only try to use reflection to access default nameservers when using Java8 and lower (#9157)
Motivation:

We should only try to use reflection to access default nameservers when using Java 8 and lower, as otherwise we will produce an Illegal reflective access warning like:

WARNING: Illegal reflective access by io.netty.resolver.dns.DefaultDnsServerAddressStreamProvider

Modifications:

Add Java version check before try to use reflective access.

Result:

No more warning when Java9+ is used.
2019-05-18 08:21:33 +02:00
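A sketch of the guard described above, using the existing PlatformDependent.javaVersion() utility; the surrounding class and method are hypothetical:

```
import io.netty.util.internal.PlatformDependent;

final class DefaultNameServerLookup {
    private DefaultNameServerLookup() { }

    /**
     * Only attempt the reflective fallback on Java 8 and below; on Java 9+ it would
     * trigger an "Illegal reflective access" warning.
     */
    static boolean mayUseReflectiveLookup() {
        return PlatformDependent.javaVersion() <= 8;
    }
}
```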
Norman Maurer f17bfd0f64
Only use static Exception instances when we can ensure addSuppressed … (#9152)
Motivation:

An OOME is caused by ever-growing suppressedExceptions because other libraries call Throwable#addSuppressed. As we have no control over what other libraries do, we need to ensure this cannot lead to an OOME.

Modifications:

Only use static instances of the exceptions if we can either disable addSuppressed or we run on Java 6.

Result:

Not possible to OOME because of addSuppressed. Fixes https://github.com/netty/netty/issues/9151.
2019-05-17 22:23:02 +02:00
Norman Maurer c565805f1b
Do not manually reset HttpObjectDecoder in HttpObjectAggregator.handleOversizedMessage(...) (#9017) (#9156)
Motivation:

We did manually call HttpObjectDecoder.reset() in HttpObjectAggregator.handleOversizedMessage(...) which is incorrect and will prevent correct parsing of the next message.

Modifications:

- Remove call to HttpObjectDecoder.reset()
- Add unit test

Result:

Verify that we can correctly parse the next request after we rejected a request.
2019-05-17 21:18:03 +02:00
Norman Maurer 1672b6d12c
Add support for TCP fallback when we receive a truncated DnsResponse (#9139)
Motivation:

Sometimes DNS responses can be very large, which means they will not fit in a UDP packet. When this happens the DNS server will set the TC (truncated) flag to tell the resolver that the response was truncated. When a truncated response is received we should allow retrying via TCP and use the received response (if possible) as a replacement for the truncated one.

See https://tools.ietf.org/html/rfc7766.

Modifications:

- Add support for TCP fallback by allowing to specify a socketChannelFactory / socketChannelType on the DnsNameResolverBuilder. If this is set to something other than null we will try to fall back to TCP.
- Add decoder / encoder for TCP
- Add unit tests

Result:

Support for TCP fallback as defined by https://tools.ietf.org/html/rfc7766 when using DnsNameResolver.
2019-05-17 14:37:11 +02:00
Norman Maurer ccf56706f8
Add missing assume checks to skip tests if KeyManagerFactory can not be used (#9148)
Motivation:

Depending on what OpenSSL library version we use / system property that is set we need to skip tests that use KeyManagerFactory.

Modifications:

Add missing assume checks for tests that use KeyManagerFactory.

Result:

All tests pass even if KeyManagerFactory is not supported
2019-05-15 07:24:01 +02:00
秦世成 cf2f1f54b6 Replace all logic that checks Null with the ObjectUtil utility class (#9145)
Motivation:

Clean the code: replace all logic that checks null with the ObjectUtil utility class in the bootstrap package

Modification:
Replace all logic that checks null with the ObjectUtil utility class

Result:

Less verbose code.
2019-05-13 19:53:45 +02:00
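The pattern in question, shown on a small hypothetical class:

```
import io.netty.util.internal.ObjectUtil;

final class HandlerHolder {
    private final Object handler;

    HandlerHolder(Object handler) {
        // Replaces a hand-rolled `if (handler == null) throw new NullPointerException(...)`
        // block with the shared utility, which also reports the parameter name.
        this.handler = ObjectUtil.checkNotNull(handler, "handler");
    }

    Object handler() {
        return handler;
    }
}
```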
RoganDawes 3221bf6854 Remove the Handler only after it has initialized the channel (#9132)
Motivation:

Previously, any 'relative' pipeline operations, such as
ctx.pipeline().replace(), .addBefore(), addAfter(), etc
would fail as the handler was not present in the pipeline.

Modification:

Used the pattern from ChannelInitializer when invoking configurePipeline().

Result:

Fixes #9131
2019-05-13 13:49:17 +02:00
Nick Hill cb85e03d72 AsciiString.lastIndexOf(...) is implemented incorrectly (#9103)
Motivation

@xiaoheng1 reported incorrect behaviour of AsciiString.lastIndexOf in
#9099. Upon closer inspection it appears that it was never implemented
correctly and searches between the provided index and the end of the
string similar to indexOf(...), rather than between the provided index
and the beginning of the string as the javadoc states (and in line with
java.lang.String).

Modifications

Fix AsciiString.lastIndexOf implementation and corresponding unit tests
to behave the same as the equivalent String methods.

Result

Fixes #9099
2019-05-13 07:03:32 +02:00
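A sketch of the corrected semantics described above, searching backwards from the given index towards the beginning of the string as java.lang.String does; illustrative only, not the actual AsciiString code:

```
final class LastIndexOfSketch {
    private LastIndexOfSketch() { }

    static int lastIndexOf(CharSequence haystack, char needle, int fromIndex) {
        // Clamp the starting point, then walk towards index 0.
        for (int i = Math.min(fromIndex, haystack.length() - 1); i >= 0; i--) {
            if (haystack.charAt(i) == needle) {
                return i;
            }
        }
        return -1;
    }
}
```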
Nick Hill 60de092e36 Fix incorrect behavior of ReadOnlyByteBufferBuf.getBytes(int,ByteBuffer) (#9125)
* Fix incorrect behavior of ReadOnlyByteBufferBuf.getBytes(int,ByteBuffer)

Motivation

It currently will succeed when the destination is larger than the source
range, but the ByteBuf javadoc states this should be a failure, as is
the case with all the other implementations.

Modifications

- Fix logic to fail the bounds check in this case
- Remove explicit null check which isn't done in any equivalent method
- Add unit test

Result

More correct/consistent behaviour
2019-05-13 07:00:06 +02:00
Norman Maurer 6ee8b651e6
DnsNameResolver.resolveAll(DnsQuestion) should not try to filter duplicates (#9141)
Motivation:

https://github.com/netty/netty/pull/9021 applied some changes to filter out duplicate InetAddresses when calling resolveAll(...) to mimic JDK behaviour. Unfortunately this also introduced a regression, as we should not filter duplicates when the user explicitly calls resolveAll(DnsQuestion).

Modifications:

- Only filter duplicates if resolveAll(String) is used
- Add unit test

Result:

Fixes regressions introduced by https://github.com/netty/netty/pull/9021
2019-05-13 06:59:06 +02:00
SplotyCode 5a27f2f78b Allow to specify KeyStore type in SslContext (#9003)
Motivation:

As brought up in https://github.com/netty/netty/issues/8998, JKS can be substantially faster than pkcs12, JDK's new default. Without an option to set the KeyStore type you must change the configuration of the entire JVM which is impractical.

Modification:

- Allow to specify KeyStore type
- Add test case

Result:

Fixes https://github.com/netty/netty/issues/8998.
2019-05-10 07:29:14 +02:00
Norman Maurer df20a125aa
Allow to have DnsNameResolver.resolveAll(...) notify as soon as the preferred records were resolved (#9136)
Motivation:

075cf8c02e introduced a change to allow resolve(...) to notify as soon as the preferred record was resolved. This works great but we should also allow the user to configure that we want to do the same for resolveAll(...), which means we should be able to notify as soon as all records for a preferred record were resolved.

Modifications:

- Add a new DnsNameResolverBuilder method to allow configure this (use false as default to not change default behaviour)
- Add unit test

Result:

Be able to speed up resolving.
2019-05-09 08:06:52 +02:00
Andrey Mizurov a74fead216 Fixed HttpHelloWorldServerHandler for handling HTTP 1.0/1.1 (#9124)
Motivation:

The HttpHelloWorldServer example works incorrectly for HTTP 1.1: the value of the Connection header is always set to close for each request.

Modification:

Correctly set header

Result:

Fixed HttpHelloWorldServerHandler for handling HTTP 1.0/1.1
2019-05-08 09:04:51 +02:00
Anuraag Agrawal 526f2da912 Add equality check to contentEquals instance methods. (#9130)
Motivation:

An instance is always equal to itself. It makes sense to skip processing for this case, which isn't uncommon since `AsciiString` is often memoized within an application when used as HTTP header names.

Modification:

`contentEquals` methods first check for instance equality before doing processing.

Result:

`contentEquals` will be faster when comparing an instance with itself.

I couldn't find any unit tests for these methods, only the static version. Let me know if I should add something to `AsciiStringCharacterTest`.

Came up here:
https://github.com/line/armeria/pull/1731#discussion_r280396280
2019-05-08 07:30:34 +02:00
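A generic sketch of the fast path described above (not the actual AsciiString code): identity implies content equality, so bail out before any per-character work:

```
final class ContentEqualsSketch {
    private ContentEqualsSketch() { }

    static boolean contentEquals(CharSequence a, CharSequence b) {
        if (a == b) {
            return true;        // same instance (or both null): trivially equal
        }
        if (a == null || b == null || a.length() != b.length()) {
            return false;
        }
        for (int i = 0; i < a.length(); i++) {
            if (a.charAt(i) != b.charAt(i)) {
                return false;
            }
        }
        return true;
    }
}
```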
Norman Maurer 71c184076c Revert "KQueueEventLoop won't unregister active channels reusing a file descriptor (#9114)"
This reverts commit 909a3d942e.
2019-05-07 16:44:41 +02:00
Julien Viet 909a3d942e KQueueEventLoop won't unregister active channels reusing a file descriptor (#9114)
Motivation:

The current KQueueEventLoop implementation does not process concurrent domain socket channel registrations/unregistrations in the order they actually
happen, since unregistrations are delayed by an event loop task scheduling. When a domain socket is closed, its file descriptor might be reused
quickly and therefore trigger a new channel registration using the same descriptor.

Consequently the KQueueEventLoop#add(AbstractKQueueChannel) method will overwrite the current inactive channels having the same descriptor
and the delayed KQueueEventLoop#remove(AbstractKQueueChannel) will remove the active channel that replaced the inactive one.

As active channels are registered, events for this file descriptor won't be processed anymore and the channels will never be closed.

Modifications:

Change the logic of KQueueEventLoop#remove(AbstractKQueueChannel) channels so it will check channels equality prior removal.

Result:

KQueueEventLoop won't remove anymore active channels reusing a file descriptor.
2019-05-07 10:19:42 +02:00
Norman Maurer 66f6b959ff
Always include classes from all native transports no matter on which platform netty-all is built (#9111)
Motivation:

While building netty-all we should always include all classes for native transports, no matter if the native part can be built or not. This way it is easier to test locally with an installed snapshot of netty-all when the code that uses it enables a specific native transport depending on whether the native bits can be loaded or not.

Modifications:

Always include classes of native transports no matter on which platform we build. When a release is done we ensure we include the native bits by using the uber-staging profile.

Result:

Easier testing with netty-all snapshots.
2019-04-30 23:23:48 +02:00
root ba06eafa1c [maven-release-plugin] prepare for next development iteration 2019-04-30 16:42:29 +00:00
root 49a451101c [maven-release-plugin] prepare release netty-4.1.36.Final 2019-04-30 16:41:28 +00:00
Norman Maurer 0c114dabed
Introduce DynamicAddressConnectHandler which can be used to dynamically change remoteAddress / localAddress when a connect is issued (#8982)
Motivation:

Bootstrap allows you to set a localAddress for outbound TCP connections, either via the Bootstrap.localAddress(localAddress) or Bootstrap.connect(remoteAddress, localAddress) methods. This works well if you want to bind to just one IP address on an interface. Sometimes you want to bind to a specific address based on the resolved remote address which should be possible.

Modifications:

Add DynamicAddressConnectHandler and tests

Result:

Fixes https://github.com/netty/netty/issues/8940.
2019-04-30 07:52:12 +02:00
Ilya Maykov c8ff76ba91 [openssl] fix refcount bug in OpenSslPrivateKeyMaterial ctor
Motivation:

Subclasses of `OpenSslKeyMaterial` implement `ReferenceCounted`. This means that a new object should have an initial refcount of 1. An `OpenSslPrivateKey.OpenSslPrivateKeyMaterial` object shares its refcount with the enclosing `OpenSslPrivateKey` object. This means the enclosing object's refcount must be incremented by 1 when an instance of `OpenSslPrivateKey.OpenSslPrivateKeyMaterial` is created. Otherwise, when the key material object is `release()`-ed, the refcount on the enclosing object will drop to 0 while it is still in use.

Modification:

- Increment the refcount in the constructor of `OpenSslPrivateKey.OpenSslPrivateKeyMaterial`
- Ensure we also always release the native certificates as well.

Result:

Refcount is now correct.
2019-04-29 23:11:18 +02:00
Divij Vaidya b9c4e17291 Invoke channelAcquired callback on first time channel acquire (#9093)
Motivation:

SimpleChannelPool provides the ability to register custom callbacks/handlers
on major events such as "channel acquired", "channel created" and
"channel released". In the current implementation, when a request to
acquire a channel is made for the first time, the internal channel pool
creates the channel lazily. This triggers the "channel created" callback
but does not invoke the "channel acquired" callback. This is contrary to
the caller's expectation that "channel acquired" will be invoked
at the end of every successful acquire call. It also leads to an
inconsistent API experience where the acquired callback is sometimes
invoked and sometimes isn't, depending on whether the internal
mechanism is creating a new channel or re-using an existing one.

Modifications:

Invoke the acquired callback consistently even when creating a new channel
and modify the tests to support this behaviour

Result:

Consistent experience for the caller of acquire API. Every time they
call the API, the acquired callback will be invoked.
2019-04-29 20:45:49 +02:00
Norman Maurer 1837209a87
Http2MultiplexCodec.DefaultHttp2StreamChannel should handle ChannelConfig.isAutoClose() in a consistent way as AbstractChannel (#9108)
Motivation:

Http2MultiplexCodec.DefaultHttp2StreamChannel currently only acts on ClosedChannelException when checking for isAutoClose(). We should widen the scope here to IOException to be more consistent with AbstractChannel.

Modifications:

Replace instanceof ClosedChannelException with instanceof IOException

Result:

More consistent handling of isAutoClose()
2019-04-29 18:50:22 +02:00
Norman Maurer 97617b254b
Adjust pom.xml to be able to build with graalvm (#9107)
Motivation:

When trying to use graalvm and build netty we currently fail because our build configuration is not compatible with it.

Modification:

- Skip plugins that are not supported when graal is used
- Correctly configure the surefire plugin for graal so it does not produce an NPE

Result:

We can build and test with graalvm.
2019-04-29 18:40:22 +02:00
Paulo Lopes f1495e1945 Add SVM metadata and minimal substitutions to build graalvm native image applications. (#8963)
Motivation:

GraalVM native images are a new way to deliver Java applications. Netty is one of the most popular libraries; however, there are a few limitations that make it impossible to use with native images out of the box. Adding a few metadata files (in specific modules) will allow the compilation to succeed and produce working binaries.

Modification:

Added properties files in `META-INF` and substitution classes (under `internal.svm`) that solve the compilation issues. The substitution classes are not public and do not have a public constructor, so they are not visible to end users.

Result:

Fixes #8959 

This fix is very conservative as it applies the minimum config required to build:

* pure netty servers
* vert.x applications
* grpc applications

The build is having trouble due to checkstyle which does not seem to be able to find the copyright notice on property files.
2019-04-29 08:39:42 +02:00
Norman Maurer fb6f8f513a
Add docker-compose file to compile / test with graalvm (#9072)
Motivation:

We should try to compile / test with graalvm as well.

Modifications:

Add docker-compose file for graalvm

Result:

Be able to also compile / test with graalvm
2019-04-29 08:33:39 +02:00
Norman Maurer b5a2774502
Fix flaky GlobalEventExecutorTest.* (#9074)
Motivation:

In GlobalEventExecutorTest we used Thread.sleep(...) which can produce flaky results (as seen on the CI). We should use another alternative during tests.

Modifications:

Replace Thread.sleep(...) with join()

Result:

No more flaky GlobalEventExecutor tests.
2019-04-29 08:33:03 +02:00
Norman Maurer 2ec6428827
Update to latest java releases (#9101)
Motivation:

There were new releases of various Java versions.

Modifications:

Adjust the used Java versions to the latest releases and so use these on our CI

Result:

Use latest java versions on our CI.
2019-04-29 08:32:27 +02:00
Norman Maurer 3367a53d3b
Throw SignatureException if OpenSslPrivateKeyMethod.* return null to prevent segfault (#9100)
Motivation:

While OpenSslPrivateKeyMethod.* should never return null we should still guard against it to prevent any possible segfault.

Modifications:

- Throw SignatureException if null is returned
- Add unit test

Result:

No segfault when user returns null.
2019-04-29 08:31:14 +02:00
Scott Mitchell b4e3c12b8e
Http2ConnectionHandler to allow decoupling close(..) from GOAWAY graceful close (#9094)
Motivation:
Http2ConnectionHandler#close(..) always runs the GOAWAY and graceful close
logic. This coupling means that a user would have to override
Http2ConnectionHandler#close(..) to modify the behavior, and the
Http2FrameCodec and Http2MultiplexCodec are not extendable so you cannot
override at this layer. Ideally we can totally decouple the close(..) of the
transport and the GOAWAY graceful closure process completely, but to preserve
backwards compatibility we can add an opt-out option to decouple where the
application is responsible for sending a GOAWAY with error code equal to
NO_ERROR as described in https://tools.ietf.org/html/rfc7540#section-6.8 in
order to initiate graceful close.

Modifications:
- Http2ConnectionHandler supports an additional boolean constructor argument to
opt out of close(..) going through the graceful close path.
- Http2FrameCodecBuilder and Http2MultiplexCodec expose
 gracefulShutdownTimeoutMillis but do not hook them up properly. Since these
are already exposed we should hook them up and make sure the timeout is applied
properly.
- Http2ConnectionHandler's goAway(..) method from Http2LifecycleManager should
initiate the graceful closure process after writing a GOAWAY frame if the error
code is NO_ERROR. This means that writing a Http2GoAwayFrame from
Http2FrameCodec will initiate graceful close.

Result:
Http2ConnectionHandler#close(..) can now be decoupled from the graceful close
process, and immediately close the underlying transport if desired.
2019-04-28 17:48:04 -07:00
Nick Hill 00a9a25f29 Ensure channel handler close() is not skipped in !hasDisconnect case (#9098)
Motivation

The optimization in #8988 didn't correctly handle the specific case
where the channel hasDisconnect == false, and a
ChannelOutboundHandlerAdapter subclass overrides only the close(ctx,
promise) method without also overriding the disconnect(ctx, promise)
method.

Modifications

Adjust AbstractChannelHandler.disconnect(...) method to divert to
close(...) in !hasDisconnect case before computing target context for
the event.

Result

Fixes #9092
2019-04-28 10:41:51 +02:00
Scott Mitchell 2d33d1493e
DefaultHeaders#valueIterator doesn't remove from the in bucket list (#9090)
Motivation:
DefaultHeaders entries maintain two linked lists: one for overall insertion order
and one for "in bucket" order. DefaultHeaders#valueIterator removal (introduced in 1d9090aab2) only reliably
removes the entry from the overall insertion order, but may not remove from the
bucket unless the element is the first entry.

Modifications:
- DefaultHeaders$ValueIterator should track 2 elements behind the next entry so
that the single linked "in bucket" list can be patched up when removing the
previous entry.

Result:
More correct DefaultHeaders#valueIterator removal.
2019-04-27 11:32:50 -07:00
Scott Mitchell 2c12f09ec9
Http2FrameCodec to simulate GOAWAY received when stream IDs are exhausted (#9095)
Motivation:
Http2FrameCodec currently fails the write promise associated with creating a
stream with a Http2NoMoreStreamIdsException. However this means the user code
will have to listen to all write futures in order to catch this scenario which
is the same as receiving a GOAWAY frame. We can also simulate receiving a GOAWAY
frame from our remote peer and that allows users to consolidate graceful close
logic in the GOAWAY processing.

Modifications:
- Http2FrameCodec should simulate a DefaultHttp2GoAwayFrame when trying to
create a stream but the stream IDs have been exhausted.

Result:
Applications can rely upon GOAWAY for graceful close processing instead of also
processing write futures.
2019-04-27 10:55:43 -07:00
Scott Mitchell ec62af01c7 DefaultHttp2ConnectionEncoder async SETTINGS ACK SimpleChannelPromiseAggregator promise usage
Motivation:
DefaultHttp2ConnectionEncoder uses SimpleChannelPromiseAggregator to combine two
operations into a single future status. However it directly uses the
SimpleChannelPromiseAggregator object instead of using the newPromise() method
in one case. This may result in premature completion of the aggregated future.

Modifications:
- DefaultHttp2ConnectionEncoder to use
  SimpleChannelPromiseAggregator#newPromise() instead of directly using the
SimpleChannelPromiseAggregator instance when writing the settings ACK frame

Result:
More correct status for the SETTINGS ACK frame write when auto SETTINGS ACK is
disabled.
2019-04-25 16:26:08 -07:00
Scott Mitchell b3dba317d7
HTTP/2 to support asynchronous SETTINGS ACK (#9069)
Motivation:
The HTTP/2 codec will synchronously respond to a SETTINGS frame with a SETTINGS
ACK before the application sees the SETTINGS frame. The application may need to
adjust its state depending upon what is in the SETTINGS frame before applying
the remote settings and responding with an ACK (e.g. to adjust for max
concurrent streams). In order to accomplish this the HTTP/2 codec should allow
for the application to opt-in to sending the SETTINGS ACK.

Modifications:
- DefaultHttp2ConnectionDecoder should support a mode where SETTINGS frames can
  be queued instead of immediately applying and ACKing.
- DefaultHttp2ConnectionEncoder should attempt to poll from the queue (if it
  exists) to apply the earliest received but not yet ACKed SETTINGS frame.
- AbstractHttp2ConnectionHandlerBuilder (and sub classes) should support a new
  option to enable the application to opt-in to managing SETTINGS ACK.

Result:
HTTP/2 allows for asynchronous SETTINGS ACK managed by the application.
2019-04-25 15:52:05 -07:00
Scott Mitchell 3579165d72 SmtpRequestEncoderTest ByteBuf leak (#9075)
Motivation:
SmtpRequestEncoderTest#testThrowsIfContentExpected has a ByteBuf leak.

Modifications:
- SmtpRequestEncoderTest#testThrowsIfContentExpected should release buffers in a finally block

Result:
No more leaks in SmtpRequestEncoderTest#testThrowsIfContentExpected.
2019-04-19 08:47:02 +02:00
Nick Hill 6248b2492b Remove static wildcard imports in EpollDomainSocketChannelConfig (#9066)
Motivation

These aren't needed, only one field from each class is used. It also showed as an ambiguous identifier compilation error in my IDE even though javac is obviously fine with it.

Modifications

Static-import explicit ChannelOption fields in EpollDomainSocketChannelConfig instead of using .* wildcard.

Result

Cleaner / more consistent code.
2019-04-18 07:33:44 +02:00
Norman Maurer e01c4bce08
Fix regression in CompositeByteBuf.discard*ReadBytes() (#9068)
Motivation:

1f93bd3 introduced a regression that could lead to the lastAccessed field not being correctly null'ed out when the endOffset of the internal Component == CompositeByteBuf.readerIndex()

Modifications:

- Correctly null out the lastAccessed field in any case
- Add unit tests

Result:

Fixes regression in CompositeByteBuf.discard*ReadBytes()
2019-04-17 18:03:08 +02:00
root baab215f66 [maven-release-plugin] prepare for next development iteration 2019-04-17 07:26:24 +00:00
root dfe657e2d4 [maven-release-plugin] prepare release netty-4.1.35.Final 2019-04-17 07:25:40 +00:00
Norman Maurer 3ebd29f9c7
Only try to use OpenSslX509TrustManagerWrapper when using Java 7+ (#9065)
Motivation:

We should only try to use OpenSslX509TrustManagerWrapper when using Java 7+, as otherwise it fails to init in its static block because X509ExtendedTrustManager was only introduced in Java 7

Modifications:

Only call OpenSslX509TrustManagerWrapper if we use Java7+

Result:

Fixes https://github.com/netty/netty/issues/9064.
2019-04-17 08:16:55 +02:00
Scott Mitchell 1d9090aab2 DefaultHeaders#valueIterator to support removal (#9063)
Motivation:
While iterating values it is often desirable to be able to remove individual
entries. The existing mechanism to do this involves removal of all entries and
conditional re-insertion which is heavy weight in order to remove a single
value.

Modifications:
- DefaultHeaders$ValueIterator supports removal

Result:
It is possible to remove entries while iterating the values in DefaultHeaders.
2019-04-16 19:37:34 +02:00
Nick Hill 9ed41db1d7 Have (Epoll|KQueue)RecvByteAllocatorHandle extend DelegatingHandle (#9060)
Motivation

These implementations delegate most of their methods to an existing Handle and previously extended RecvByteBufAllocator.DelegatingHandle. This was reverted in #6322 with the introduction of ExtendedHandle but it's not clear to me why it needed to be - the code looks a lot cleaner.

Modifications

Have (Epoll|KQueue)RecvByteAllocatorHandle extend DelegatingHandle again, while still implementing ExtendedHandle.

Result

Less code.
2019-04-16 09:14:09 +02:00
Norman Maurer 075cf8c02e
DnsNameResolver.resolve(...) should notify future as soon as one preferred record was resolved (#9050)
Motivation:

At the moment resolve(...) just delegates to resolveAll(...) and so will only notify the future once all records have been resolved. This is wasteful as we are only interested in the first record anyway. We should notify the promise as soon as one record that matches the preferred record type is resolved.

Modifications:

- Introduce DnsResolveContext.isCompleteEarly(...) to be able to detect once we should early notify the promise.
- Make use of this early detection if resolve(...) is called
- Remove a FutureListener which could lead to IllegalReferenceCountException due to double releases
- add unit test

Result:

Be able to notify about resolved host more quickly.
2019-04-15 21:42:04 +02:00
Norman Maurer 4b36a5b08b
Correctly calculate ttl for AuthoritativeNameServer when update existing records (#9051)
Motivation:

We did not correctly calculate the new TTL as we forgot to add `this.`

Modifications:

Add `this.` and so correctly calculate the TTL

Result:

Use correct TTL for authoritative nameservers when updating these.
2019-04-15 21:41:04 +02:00
Norman Maurer 741bcd485d
Make Multicast tests more robust (#9053)
Motivation:

86dd388637 reverted the usage of IPv6 Multicast test. This commit makes the whole multicast testing a lot more robust by selecting the correct interface in any case and also reverts the `@Ignore`

Modifications:

- More robust multicast testing by selecting the right NetworkInterface
- Remove the `@Ignore` again for the IPv6 test

Result:

More robust multicast testing
2019-04-15 21:39:31 +02:00
Francesco Nigro fb50847e39 The benchmark is not taking into account nanoTime granularity (#9033)
Motivation:

Results are just wrong for small delays.

Modifications:

Switching to AverageTime avoids relying on OS nanoTime granularity.

Result:

Uncontended low delay results are not reliable
2019-04-15 15:14:36 +02:00
BELUGABEHR 09faa72296 Use ArrayDeque instead of LinkedList (#9046)
Motivation:
Prefer ArrayDeque to LinkedList because the latter produces more GC pressure.

Modification:
- Replace LinkedList with ArrayDeque

Result:
Less GC
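As a minimal pure-Java sketch of the swap (queue contents and names are hypothetical), ArrayDeque exposes the same Queue API without allocating a node object per element:

```
import java.util.ArrayDeque;
import java.util.Queue;

public class ArrayDequeExample {
    public static void main(String[] args) {
        // ArrayDeque stores elements in a backing array, so offer/poll does not
        // allocate a per-element node the way LinkedList does.
        Queue<String> pendingWrites = new ArrayDeque<>();
        pendingWrites.offer("frame-1");
        pendingWrites.offer("frame-2");
        while (!pendingWrites.isEmpty()) {
            System.out.println(pendingWrites.poll());
        }
    }
}
```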
2019-04-15 15:13:22 +02:00
Norman Maurer dde3f561bc
Use ResolvedAddressTypes.IPV4_ONLY in DnsNameResolver by default if n… (#9048)
Motivation:

To closely mimic what the JDK does we should not try to resolve AAAA records if the system itself does not support IPv6 at all, as it is impossible to connect to these addresses later on. In this case we need to use ResolvedAddressTypes.IPV4_ONLY.

Modifications:

Add static method to detect if IPv6 is supported and if not use ResolvedAddressTypes.IPV4_ONLY.

Result:

More consistent behaviour between JDK and our resolver implementation.
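For context, a hedged sketch of pinning a resolver to A records explicitly, which mirrors what the default now does automatically on hosts without IPv6. The hostname is only illustrative and the lookup hits the network:

```
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.ResolvedAddressTypes;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;

public class Ipv4OnlyResolverExample {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup(1);
        try {
            // Resolve A records only, i.e. ResolvedAddressTypes.IPV4_ONLY.
            DnsNameResolver resolver = new DnsNameResolverBuilder(group.next())
                    .channelType(NioDatagramChannel.class)
                    .resolvedAddressTypes(ResolvedAddressTypes.IPV4_ONLY)
                    .build();
            System.out.println(resolver.resolve("netty.io").sync().getNow());
            resolver.close();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```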
2019-04-15 13:07:05 +02:00
Norman Maurer 26cd59c328
DnsNameResolver.resolveAll(...) should also contain non preferred addresses (#9044)
Motivation:

At the moment we basically drop all non-preferred addresses when calling DnsNameResolver.resolveAll(...). This is just incorrect and was introduced by 4cd39cc4b3. More correct is to still retain these but sort the returned List so that the preferred addresses appear at the beginning of the List. This also ensures resolve(...) will return the correct return type.

Modifications:

- Introduce PreferredAddressTypeComperator which we use to sort the List so it will contain the preferred address type first.
- Add unit test to verify behaviour

Result:

Include non-preferred addresses as well in the List that is returned by resolveAll(...)
2019-04-15 10:19:54 +02:00
Norman Maurer 34aa2c841c
Don't use sun.misc.Unsafe when IKVM.NET is used (#9042)
Motivation:

IKVM.NET seems to ship a buggy sun.misc.Unsafe class; for this reason we should disable our sun.misc.Unsafe usage when we detect that IKVM.NET is used.

Modifications:

Check if IKVM.NET is used and if so do not use sun.misc.Unsafe by default.

Result:

Fixes https://github.com/netty/netty/issues/9035 and https://github.com/netty/netty/issues/8916.
2019-04-12 22:41:53 +02:00
Norman Maurer 48edf40861
Make validation tools more happy by not have TrustManager impl just accept (#9041)
Motivation:

Some analyzer / validation tools scan code to detect whether it may pose a security risk because certificates are blindly accepted. Such a tool tagged our code because we have such an implementation (which is actually never used). We should just change the impl to not do this, as it does not matter for us and it makes such tools happier.

Modifications:

Throw CertificateException

Result:

Fixes https://github.com/netty/netty/issues/9032
2019-04-12 21:36:57 +02:00
Norman Maurer 86dd388637 Revert "Added UDP multicast (with caveats: no ipv6, getInterface, getNetworkI… (#9006)"
This reverts commit a3e8c86741 as there are some issues that need to be fixed first.
2019-04-12 21:32:22 +02:00
Daniel Anderson bedc8a6ea5 Documentation update to MessageSizeEstimator (#9034)
Motivation:

Did not understand the context of "ca".

Modification:

Clarified "CA" to "approximately".

Result:

Fixes #9031
2019-04-12 19:16:23 +02:00
Norman Maurer 0c79fc8b63
Update to netty-tcnative 2.0.25.Final to fix possible segfault when openssl < 1.0.2 and gcc is used. (#9038)
Motivation:

We should update to netty-tcnative 2.0.25.Final as it fixes a possible segfault on systems that use openssl < 1.0.2 and for which we compiled with gcc.

See https://github.com/netty/netty-tcnative/pull/457

Modifications:

Update netty-tcnative

Result:

No more segfault possible.
2019-04-12 19:15:02 +02:00
Norman Maurer 9e491dda14 Ignore ipv6 multicast test that was added in 778ff2057e for now
Motivation:

The multicast ipv6 test fails on some systems. As I just added it let me ignore it for now while investigating.

Modifications:

Add @Ignore

Result:

Stable testsuite while investigating
2019-04-12 16:56:44 +02:00
Norman Maurer fcfa9eb9a8
Throw IOException (not ChannelException) if netty_epoll_linuxsocket_setTcpMd5Sig fails (#9039)
Motivation:

At the moment we throw a ChannelException if netty_epoll_linuxsocket_setTcpMd5Sig fails. This is inconsistent with other methods which throw an IOException.

Modifications:

Throw IOException

Result:

More correct and consistent exception usage in epoll transport
2019-04-12 15:15:27 +02:00
Norman Maurer 778ff2057e
Add IPv6 multicast test to testsuite (#9037)
Motivation:

We currently only cover ipv4 multicast in the testsuite but we should also have tests for ipv6.

Modifications:

- Add test for ipv6
- Ensure we only try to run multicast test for ipv4 / ipv6 if the loopback interface supports it.

Result:

Better test coverage
2019-04-12 12:29:08 +02:00
Norman Maurer 6ed203b7ba
NioServerSocketChannel.isActive() must return false after close() completes. (#9030)
Motivation:

When a Channel was closed its isActive() method must return false.

Modifications:

First check for isOpen() before isBound() as isBound() will continue to return true even after the underlying fd was closed.

Result:

Fixes https://github.com/netty/netty/issues/9026.
2019-04-11 18:54:31 +02:00
Norman Maurer 6278d09139
netty_epoll_linuxsocket_setTcpMd5Sig should throw ChannelException when not able to init sockaddr (#9029)
Motivation:

When netty_epoll_linuxsocket_setTcpMd5Sig fails to init the sockaddr we should throw an exception and not silently return.

Modifications:

Throw exception if init of sockaddr fails.

Result:

Correctly report back error to user.
2019-04-11 18:50:27 +02:00
Norman Maurer 45b0daf9e6
netty_epoll_linuxsocket_setTcpMd5Sig should throw ChannelException when not able to init sockaddr (#9029)
Motivation:

When netty_epoll_linuxsocket_setTcpMd5Sig fails to init the sockaddr we should throw an exception and not silently return.

Modifications:

Throw exception if init of sockaddr fails.

Result:

Correctly report back error to user.
2019-04-11 18:50:16 +02:00
Oleksii Kachaiev ee351ef8bc WebSocket client handshaker to support "force close" after timeout (#8896)
Motivation:

RFC 6455 defines that, generally, a WebSocket client should not close a TCP
connection, since the server is the one responsible for doing that.
In practice, though, it's not always possible to control the server. A server's
misbehavior may lead to connections being leaked (if the server does not
comply with the RFC).

RFC 6455 #7.1.1 says

> In abnormal cases (such as not having received a TCP Close from the server
after a reasonable amount of time) a client MAY initiate the TCP Close.

Modifications:

* WebSocket client handshaker additional param `forceCloseAfterMillis`

* Use 10 seconds as default

Result:

WebSocket client handshaker to comply with RFC. Fixes #8883.
2019-04-10 15:25:34 +02:00
Scott Mitchell ac023da16d Correctly handle overflow in Native.kevent(...) when EINTR is detected (#9024)
Motivation:
When kevent(...) returns with EINTR we do not correctly decrement the timespec
structure contents to account for the time duration. This may lead to negative
values for tv_nsec which will result in an EINVAL and raise an IOException to
the event loop selection loop.

Modifications:
Correctly calculate new timeoutTs when EINTR is detected

Result:
Fixes #9013.
2019-04-10 11:04:13 +02:00
Norman Maurer c0d3444f6d
DnsNameResolver should log in trace level if notification of the promise fails (#9022)
Motivation:

While investigating another bug I noticed that we log at warn level if we fail to notify the promise because it is already fulfilled. This is not correct and is misleading, as there is nothing wrong with that in general. A promise may already be fulfilled because we issued multiple queries and one of them was successful.

Modifications:

- Change log level to trace
- Add a unit test which previously logged at warn level but now logs at trace level.

Result:

Less misleading noise in the log.
2019-04-10 07:13:53 +02:00
秦世成 51112e2b36 Avoid IdleStateHandler triggering unexpected idle events when flushing large entries to slow clients (#9020)
Motivation:

IdleStateHandler may trigger unexpected idle events when flushing large entries to slow clients.

Modification:

In the Netty design, we check the identity hash code and the total pending write bytes of the current flush entry to determine whether there is a change in output. But if a large entry has been flushing slowly (for example because the network is slow, or because the client processes data so slowly that the TCP sliding window drops to zero), the total pending write bytes and the identity hash code remain unchanged.

Avoid this issue by adding checks for the current entry flush progress.

Result:

Fixes #8912 .
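For context, a hedged sketch of how IdleStateHandler is typically wired (the 30-second writer-idle timeout and handler names are arbitrary); the fix only changes how write progress is observed internally:

```
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

public class IdleSetupExample extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        // 0s reader idle, 30s writer idle, 0s all idle. With the fix, a large
        // entry that is still making flush progress no longer counts as idle.
        ch.pipeline().addLast(new IdleStateHandler(0, 30, 0));
        ch.pipeline().addLast(new ChannelDuplexHandler() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent) {
                    ctx.close(); // react to a genuinely idle connection
                } else {
                    super.userEventTriggered(ctx, evt);
                }
            }
        });
    }
}
```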
2019-04-09 16:26:27 +02:00
Nick Hill b26a61acd1 Centralize internal reference counting logic (#8614)
Motivation

AbstractReferenceCounted and AbstractReferenceCountedByteBuf contain
duplicate logic for managing the volatile refcount in an optimized and
consistent manner, which increased in complexity in #8583. It's possible
to extract this into a common helper class now that all access is via an
AtomicIntegerFieldUpdater.

Modifications

- Move duplicate logic into a shared ReferenceCountUpdater class
- Incorporate some additional simplification for the most common single
increment/decrement cases (fewer checks/operations)

Result

Less code duplication, better encapsulation of the "non-trivial"
internal volatile refcount manipulation
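A deliberately simplified, hypothetical illustration of the updater-based refcount pattern being consolidated here; this is not the actual ReferenceCountUpdater code and omits its encoding tricks:

```
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

public class SimpleRefCounted {
    private static final AtomicIntegerFieldUpdater<SimpleRefCounted> UPDATER =
            AtomicIntegerFieldUpdater.newUpdater(SimpleRefCounted.class, "refCnt");

    private volatile int refCnt = 1;

    public int refCnt() {
        return refCnt;
    }

    public SimpleRefCounted retain() {
        // Optimistically increment, then roll back if the object was already released.
        if (UPDATER.getAndIncrement(this) <= 0) {
            UPDATER.getAndDecrement(this);
            throw new IllegalStateException("already released");
        }
        return this;
    }

    public boolean release() {
        int newCnt = UPDATER.decrementAndGet(this);
        if (newCnt == 0) {
            deallocate();
            return true;
        }
        if (newCnt < 0) {
            throw new IllegalStateException("released too often");
        }
        return false;
    }

    protected void deallocate() {
        // free underlying resources here
    }
}
```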
2019-04-09 16:22:32 +02:00
Norman Maurer e63c596f24
DnsNameResolver.resolveAll(...) should not include duplicates (#9021)
Motivation:

DnsNameResolver#resolveAll(String) may return duplicate results in the event that the original hostname DNS response includes an IP address X and a CNAME that ends up resolving to the same IP address X. This behavior is inconsistent with the JDK’s resolver, and it is unexpected for a resolveAll(...) call to return a List with duplicate entries.

Modifications:

- Filter out duplicates
- Add unit test

Result:

More consistent and less surprising behavior
2019-04-09 09:44:23 +02:00
Norman Maurer 8f7ef1cabb
Skip execution of Channel*Handler method if annotated with @Skip and … (#8988)
Motivation:

Invoking ChannelHandlers is not free and can result in some overhead when the ChannelPipeline becomes very long. This is especially true if most handlers just forward the call to the next handler in the pipeline. When the user extends Channel*HandlerAdapter we can easily detect if we can just skip the handler and invoke the next handler in the pipeline directly. This reduces the dispatch overhead and also reduces the call stack in many cases.

This backports https://github.com/netty/netty/pull/8723 and https://github.com/netty/netty/pull/8987 to 4.1

Modifications:

Detect if we can skip the handler when walking the pipeline.

Result:

Reduce overhead for long pipelines.

Benchmark                                       (extraHandlers)   Mode  Cnt       Score      Error  Units
DefaultChannelPipelineBenchmark.propagateEventOld             4  thrpt   10  267313.031 ± 9131.140  ops/s
DefaultChannelPipelineBenchmark.propagateEvent                4  thrpt   10  824825.673 ± 12727.594  ops/s
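The skip detection itself is internal to the pipeline; as an illustrative sketch, a handler like the hypothetical one below overrides only channelRead(...), so all other events can be dispatched past it directly:

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Only channelRead(...) is overridden; every other event keeps the adapter's
// pass-through behaviour, which is exactly what the pipeline can now skip.
public class CountingHandler extends ChannelInboundHandlerAdapter {
    private long messages;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        messages++;
        ctx.fireChannelRead(msg);
    }
}
```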
2019-04-09 09:36:52 +02:00
Norman Maurer c3c05e8570 Fix NPE in OpenSslPrivateKeyMethodTest.destroy() when BoringSSL is not used
Motivation:

4079189f6b introduced OpenSslPrivateKeyMethodTest which will only be run when BoringSSL is used. As the assumeTrue(...) also guards the init of the static fields, we need to ensure we only try to destroy these if BoringSSL is used, as otherwise it will produce an NPE.

Modifications:

Check if BoringSSL is used before trying to destroy the resources.

Result:

No more NPE when BoringSSL is not used.
2019-04-09 08:31:39 +02:00
Norman Maurer ec21e575d7
Correctly discard messages after oversized message is detected. (#9015)
Motivation:

32563bfcc1 introduced a regression in which we no longer discarded messages after we handled an oversized message.

Modifications:

- Do not set aggregating to false after handleOversizedMessage is called
- Adjust unit tests to verify the behaviour is correct again.

Result:

Fixes https://github.com/netty/netty/issues/9007.
2019-04-08 21:09:06 +02:00
Nick Hill 9f2221ebd4 CompositeByteBuf optimizations and new addFlattenedComponents method (#8939)
Motivation:

The CompositeByteBuf discardReadBytes / discardReadComponents methods are currently quite inefficient, including when there are no read components to discard. We would like to call the latter more frequently in ByteToMessageDecoder#COMPOSITE_CUMULATOR.

In the same context it would be beneficial to perform a "shallow copy" of a composite buffer (for example when it has a refcount > 1) to avoid having to allocate and copy the contained bytes just to obtain an "independent" cumulation.

Modifications:

- Optimize discardReadBytes() and discardReadComponents() implementations (start at first comp rather than performing a binary search for the readerIndex).
- New addFlattenedComponents(boolean,ByteBuf) method which performs a shallow copy if the provided buffer is also composite and avoids adding any empty buffers, plus unit test.
- Other minor optimizations to avoid unnecessary checks.

Results:

discardReadXX methods are faster, composite buffers can be easily appended without deepening the buffer "tree" or retaining unused components.
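A small usage sketch of the new method (buffer contents are arbitrary); it shows how an incoming composite's leaf components are added directly instead of nesting the composite itself:

```
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public class AddFlattenedExample {
    public static void main(String[] args) {
        CompositeByteBuf cumulation = ByteBufAllocator.DEFAULT.compositeBuffer();
        CompositeByteBuf incoming = ByteBufAllocator.DEFAULT.compositeBuffer();
        incoming.addComponent(true, Unpooled.copiedBuffer(new byte[] { 1, 2 }));
        incoming.addComponent(true, Unpooled.copiedBuffer(new byte[] { 3, 4 }));

        // Transfers the components of 'incoming' (ownership included) instead of
        // nesting the composite, so the buffer "tree" does not get deeper.
        cumulation.addFlattenedComponents(true, incoming);

        System.out.println(cumulation.numComponents()); // 2
        cumulation.release();
    }
}
```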
2019-04-08 20:48:08 +02:00
Norman Maurer 188f5364db Revert back to depend on netty-tcnative
Motivation:

4079189f6b changed the dependency to netty-tcnative-boringssl-static but it should still be netty-tcnative.

Modifications:

Change back to netty-tcnative

Result:

Correct dependency is used
2019-04-08 20:27:05 +02:00
Norman Maurer 4079189f6b
Allow to offload / customize key signing operations when using BoringSSL. (#8943)
Motivation:

BoringSSL allows customizing the way key signing is done and even offloading it from the IO thread. We should provide a way to plug in an own implementation when BoringSSL is used.

Modifications:

- Introduce OpenSslPrivateKeyMethod that can be used by the user to implement custom signing by using ReferenceCountedOpenSslContext.setPrivateKeyMethod(...)
- Introduce static methods on OpenSslKeyManagerFactory which allow creating a KeyManagerFactory that supports keyless operations by letting the user handle everything in OpenSslPrivateKeyMethod.
- Add testcase which verifies that everything works as expected

Result:

A user is able to customize the way how keys are signed.
2019-04-08 20:17:44 +02:00
Steve Buzzard a3e8c86741 Added UDP multicast (with caveats: no ipv6, getInterface, getNetworkI… (#9006)
…nterface, block or loopback-mode-disabled operations).


Motivation:

Provide epoll/native multicast to support high load multicast users (we are using it for a high load telecomm app at my day job).

Modification:

Added support for (ipv4 only) source specific and any source multicast for the epoll transport. Some caveats (beyond no ipv6 support initially - there’s a bit of work to add in join and leave group, specifically around SSM, as ipv6 uses different data structures for this): no support for disabling loopback mode, retrieving the interface, or the block operation, all of which tend to be less frequently used.

Result:

Provides epoll transport multicast for IPv4 for common use cases. Understand if you’d prefer to hold off until ipv6 is included but not sure when I’ll be able to get to that.
2019-04-08 20:13:39 +02:00
Farid Zakaria 4373a1fba2 Increase default bits for SelfSignedCertificate (#9019)
Motivation:
During OpenSsl.java initialization, a SelfSignedCertificate is created
during the static initialization block to determine if OpenSsl
can be used.

The default key strength for SelfSignedCertificate was too low if FIPS
mode is used and BouncyCastle-FIPS is the only available provider
(necessary for compliance). A simple fix is to just augment the key
strength to the minimum required about by FIPS.

Modification:
Set the default key bit length to 2048 but also allow it to be set dynamically via a system property, for future-proofing against stricter security compliance requirements.

Result:
Fixes #9018

Signed-off-by: Farid Zakaria <farid.m.zakaria@gmail.com>
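For reference, a minimal sketch of the typical SelfSignedCertificate usage this change affects (test and local development use only):

```
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.util.SelfSignedCertificate;

public class SelfSignedExample {
    public static void main(String[] args) throws Exception {
        // Generates a throwaway certificate (now with a stronger key by default)
        // that is good enough for tests and local development.
        SelfSignedCertificate ssc = new SelfSignedCertificate();
        SslContext sslCtx = SslContextBuilder
                .forServer(ssc.certificate(), ssc.privateKey())
                .build();
        System.out.println("Cert stored at " + ssc.certificate());
        ssc.delete();
    }
}
```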
2019-04-08 20:08:59 +02:00
Norman Maurer 4b83be1ceb
We should fail fast if the given PrivateKey or X509Certificate chain is not supported by the used SslProvider. (#9009)
Motivation:

Some SslProviders only support certain types of keys and chains. We should fail fast if we cannot support the type.

Related to https://github.com/netty/netty-tcnative/issues/455.

Modifications:

- Try to parse the key / chain first and if this fails throw an SslException
- Add tests.

Result:

Fail fast.
2019-04-08 15:20:14 +02:00
Norman Maurer 60d135f0c8
Deprecate ChannelOption.newInstance(...) (#8997)
Motivation:

Deprecate ChannelOption.newInstance(...) as it is not used.

Modifications:

Deprecate ChannelOption.newInstance(...) as valueOf(...) should be used as a replacement.

Result:

Fixes https://github.com/netty/netty/issues/8983.
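A minimal sketch of the suggested replacement (the option name is hypothetical):

```
import io.netty.channel.ChannelOption;

public class ChannelOptionExample {
    public static void main(String[] args) {
        // valueOf(...) returns the existing constant if one with this name exists,
        // otherwise it creates (and caches) a new ChannelOption.
        ChannelOption<Boolean> option = ChannelOption.valueOf("MY_CUSTOM_OPTION");
        System.out.println(option.name());
    }
}
```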
2019-04-05 12:09:54 +02:00
Norman Maurer 547a375737
Always include initial handshake exception when throwing SslHandshakeException (#9008)
Motivation:

A callback may already have stored an initial handshake exception in ReferenceCountedOpenSslEngine, so we should include it when throwing a SslHandshakeException to ensure the user has all the information when debugging.

Modifications:

Include initial handshake exception

Result:

Include all errors when throwing the SslHandshakeException.
2019-04-05 09:55:32 +02:00
Norman Maurer ad928c19eb
Mark flaky test as @Ignore (#9010)
Motivation:

0a0da67f43 introduced a testcase which is flaky. We need to fix it and enable it again.

Modifications:

Mark flaky test as ignore.

Result:

No flaky build anymore.
2019-04-04 21:05:36 +02:00
Oleksii Kachaiev 52411233d3 Carefully manage Keep-Alive/Close connection headers in all examples (#8966)
Motivation:

"Connection: close" header should be specified each time we're going
to close an underlying TCP connection when sending an HTTP/1.1 reply.

Modifications:

Introduces changes made in #8914 for the following examples:

* WebSocket index page and WebSocket server handler
* HelloWorld server
* SPDY server handler
* HTTP/1.1 server handler from HTTP/2 HelloWorld example
* HTTP/1.1 server handler from tiles example

Result:

Keep-Alive connection management conforms to the RFCs.
2019-04-02 21:10:11 +02:00
Norman Maurer 20042b6522
Add @SupressWarnings("deprecation") to ChannelInboundHandlerAdapter and clarify deprecation in ChannelHandler (#9001)
Motivation:

https://github.com/netty/netty/pull/8826 added @Deprecated to the exceptionCaught(...) method but we missed adding @SuppressWarnings(...) to its sub-types. Besides this we can make the deprecation docs a bit more clear.

Modifications:

- Add @SuppressWarnings("deprecation")
- Clarify docs.

Result:

Less warnings and more clear deprecated docs.
2019-04-02 20:52:06 +02:00
Norman Maurer f8c89e2e05
Remove call to SSL.setHostNameValidation(...) as it is done in the TrustManager (#8981)
Motivation:

We do not need to call SSL.setHostNameValidation(...) as it should be done as part of the TrustManager implementation. This is consistent with the JDK implementation of SSLEngine.

Modifications:

Remove call to SSL.setHostNameValidation(...)

Result:

More consistent behaviour between our SSLEngine implementation and the one that comes with the JDK.
2019-04-01 21:02:36 +02:00
Norman Maurer a2b85a306d
Fix NPE that was encounter by debugger (will never happen in real code). (#8992)
Motivation:

We synchronize on chunk.arena when producing the String returned by PoolSubpage.toString(), which may raise an NPE when chunk == null. chunk == null for the head of the linked list and so an NPE may be raised by a debugger. This NPE can never happen in real code though, as we never access toString() of the head.

Modifications:

Add null checks and so fix the possible NPE

Result:

No NPE when using a debugger and inspecting the PooledByteBufAllocator.
2019-04-01 19:44:28 +02:00
Norman Maurer f7359aa742
Use SSL.setKeyMaterial(...) to test if the KeyManagerFactory is supported (#8985)
Motivation:

We use SSL.setKeyMaterial(...) in our implementation when using the KeyManagerFactory so we should also use it to detect if we can support KeyManagerFactory.

Modifications:

Use SSL.setKeyMaterial(...) as replacement for SSL.setCertificateBio(...)

Result:

Use the same method call to detect if KeyManagerFactory can be supported as we use in the real implementation.
2019-04-01 12:03:05 +02:00
Norman Maurer e7c427c714
Update to latest openjdk13 EA release (#8990)
Motivation:

A new openjdk13 EA release is out.

Modifications:

Update openjdk13 version.

Result:

Run build on CI with latest openjdk13 EA build
2019-03-30 20:29:09 +01:00
Vladimir Kostyukov 0a0da67f43 Introduce SingleThreadEventLoop.registeredChannels (#8428)
Motivation:

Systems depending on Netty may benefit (telemetry, alternative event loop scheduling algorithms) from knowing the number of channels assigned to each EventLoop.

Modification:

Expose the number of channels registered in the EventLoop via SingleThreadEventLoop.registeredChannels.

Result:

Fixes #8276.
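A hedged sketch of reading the new per-loop counter, assuming the accessor is exposed as SingleThreadEventLoop.registeredChannels() as named above:

```
import io.netty.channel.EventLoopGroup;
import io.netty.channel.SingleThreadEventLoop;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.util.concurrent.EventExecutor;

public class RegisteredChannelsExample {
    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup();
        for (EventExecutor executor : group) {
            if (executor instanceof SingleThreadEventLoop) {
                // Per-loop channel count, e.g. for telemetry or custom balancing.
                System.out.println(((SingleThreadEventLoop) executor).registeredChannels());
            }
        }
        group.shutdownGracefully();
    }
}
```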
2019-03-28 11:33:12 +00:00
Norman Maurer 8206604003
Upgrade to new netty-build and com.puppycrawl.tools 8.18 (#8980)
Motivation:

com.puppycrawl.tools checkstyle < 8.18 was reported to contain a possible security flaw. We should upgrade.

Modifications:

- Upgrade netty-build and checkstyle.
- Fix checkstyle errors

Result:

Fixes https://github.com/netty/netty/issues/8968.
2019-03-26 14:21:34 +01:00
Norman Maurer 86ecad517c
Consolidate creation of SslHandshakeException when caused by a callback that is used in the native SSL implementation. (#8979)
Motivation:

We have multiple places where we store the exception that was produced by a callback in ReferenceCountedOpenSslEngine, and so have a lot of code-duplication.

Modifications:

- Consolidate code into a package-private method that is called from the callbacks if needed

Result:

Less code-duplication and cleaner code.
2019-03-26 11:38:37 +01:00
Norman Maurer bb1e038198
Cleanup example to use local variable. (#8976)
Motivation:

We can just use a local variable in HttpUploadServerHandler and so make the example code a bit cleaner.

Modifications:

Use local variable.

Result:

Fixes https://github.com/netty/netty/issues/8892.
2019-03-25 20:54:57 +01:00
Norman Maurer 41b0236815
Allow to offload certificate validation when using BoringSSL (#8974)
Motivation:

BoringSSL supports offloading certificate validation to a different thread. This is useful as it may need to do blocking operations and so may block the EventLoop.

Modification:

- Adjust ReferenceCountedOpenSslEngine to correctly handle offloaded certificate validation (just as we already have code for certificate selection).

Result:

Be able to offload certificate validation when using BoringSSL.
2019-03-24 20:03:30 +01:00
Norman Maurer 33e2f5609d Revert "Allow to offload certificate validation when using BoringSSL (#8908)"
This reverts commit 316dd98284.
2019-03-24 09:33:42 +01:00
Norman Maurer 316dd98284
Allow to offload certificate validation when using BoringSSL (#8908)
Motivation:

BoringSSL supports offloading certificate validation to a different thread. This is useful as it may need to do blocking operations and so may block the EventLoop.

Modification:

- Adjust ReferenceCountedOpenSslEngine to correctly handle offloaded certificate validation (just as we already have code for certificate selection).

Result:

Be able to offload certificate validation when using BoringSSL.
2019-03-24 09:03:27 +01:00
Norman Maurer 33128c85f8
Add SSLEngineTest to ensure Signature Algorithms are present during KeyManager calls. (#8965)
Motivation:

We had a bug which could cause ExtendedSSLSession.getPeerSupportedSignatureAlgorithms() to return an empty array when using BoringSSL. This testcase verifies we correctly return algorithms after the fix in https://github.com/netty/netty-tcnative/pull/449.

Modifications:

Add testcase to verify behaviour.

Result:

Ensure we correctly return the algorithms.
2019-03-24 07:35:03 +01:00
Norman Maurer de551dfef0
Also use adoptjdk builds when using docker-sync (#8971)
Motivation:

We recently changed the docker config to use adoptjdk builds but missed including the docker-sync related files.

Modifications:

Use adoptjdk there as well.

Result:

More consistent usage of JDK versions.
2019-03-23 17:12:44 +01:00
Norman Maurer 1ca37a0edb
Correctly detect exeception cause when using BoringSSL in SslErrorTest (#8970)
Motivation:

e9ce5048df added a testcase to ensure we correctly send the alert in all cases, but it used too strict message matching which did not work for BoringSSL, as BoringSSL uses underscores rather than whitespace.

Modifications:

Make the message matching less strict.

Result:

Tests also pass when using BoringSSL.
2019-03-22 16:30:53 +01:00
Norman Maurer 78c02aa033
Update to latest JDK releases in our CI (#8969)
Motivation:

We should use the latest JDK release on our CI

Modifications:

Update all versions.

Result:

Test on latest JDK versions on our CI
2019-03-22 15:22:47 +01:00
Andrey Mizurov fc6e668186 Add user possibility to skip the evaluation of a certain websocket ex… (#8910)
Motivation:

Give the user the possibility to skip the evaluation of certain WebSocket extensions;
for example, we can skip the compression extension for messages that are already compressed or very small.

Modification:

This pull request is related to #5669

Result:

The user can set a filter on WebSocketClientExtensionHandshaker or WebSocketServerExtensionHandshaker to skip the evaluation of certain extensions.
2019-03-22 14:48:22 +01:00
Norman Maurer 922e463524
Don't try to put back MemoryRegionCache.Entry objects into the Recycler when recycled because of a finalizer. (#8955)
Motivation:

In MemoryRegionCache.Entry we use the Recycler to reduce GC pressure and churn. The problem is that these will also be recycled when the PoolThreadCache is collected and finalize() is called. This can then have the effect that we try to load a class while the WebApp is already stopped.

This will produce a stacktrace like this on Tomcat:

```
19-Mar-2019 15:53:21.351 INFO [Finalizer] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [java.util.WeakHashMap]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
 java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [java.util.WeakHashMap]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
	at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1383)
	at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1371)
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1224)
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
	at io.netty.util.Recycler$3.initialValue(Recycler.java:233)
	at io.netty.util.Recycler$3.initialValue(Recycler.java:230)
	at io.netty.util.concurrent.FastThreadLocal.initialize(FastThreadLocal.java:188)
	at io.netty.util.concurrent.FastThreadLocal.get(FastThreadLocal.java:142)
	at io.netty.util.Recycler$Stack.pushLater(Recycler.java:624)
	at io.netty.util.Recycler$Stack.push(Recycler.java:597)
	at io.netty.util.Recycler$DefaultHandle.recycle(Recycler.java:225)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry.recycle(PoolThreadCache.java:478)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:459)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:430)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:422)
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:279)
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:270)
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:241)
	at io.netty.buffer.PoolThreadCache.finalize(PoolThreadCache.java:230)
	at java.lang.System$2.invokeFinalize(System.java:1270)
	at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
	at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)
```

Besides this, we also need to ensure we do not try to lazily load SizeClass when the finalizer is used, as it may not be present anymore if the ClassLoader is already destroyed.

This would produce an error like:

```
20-Mar-2019 11:26:35.254 INFO [Finalizer] org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading Illegal access: this web application instance has been stopped already. Could not load [io.netty.buffer.PoolArena$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
 java.lang.IllegalStateException: Illegal access: this web application instance has been stopped already. Could not load [io.netty.buffer.PoolArena$1]. The following stack trace is thrown for debugging purposes as well as to attempt to terminate the thread which caused the illegal access.
	at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForResourceLoading(WebappClassLoaderBase.java:1383)
	at org.apache.catalina.loader.WebappClassLoaderBase.checkStateForClassLoading(WebappClassLoaderBase.java:1371)
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1224)
	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1186)
	at io.netty.buffer.PoolArena.freeChunk(PoolArena.java:287)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.freeEntry(PoolThreadCache.java:464)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:429)
	at io.netty.buffer.PoolThreadCache$MemoryRegionCache.free(PoolThreadCache.java:421)
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:278)
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:269)
	at io.netty.buffer.PoolThreadCache.free(PoolThreadCache.java:240)
	at io.netty.buffer.PoolThreadCache.finalize(PoolThreadCache.java:229)
	at java.lang.System$2.invokeFinalize(System.java:1270)
	at java.lang.ref.Finalizer.runFinalizer(Finalizer.java:102)
	at java.lang.ref.Finalizer.access$100(Finalizer.java:34)
	at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:217)
```

Modifications:

- Only try to put the Entry back into the Recycler if the PoolThreadCache is not destroyed because of the finalizer.
- Only try to access SizeClass if not triggered by finalizer.

Result:

No IllegalStateException anymore when a webapp that uses Netty and the PooledByteBufAllocator is reloaded in Tomcat.
2019-03-22 12:16:21 +01:00
Nick Hill b36f75044f Fix possible ByteBuf leak when CompositeByteBuf is resized (#8946)
Motivation:

The special case fixed in #8497 also requires that we keep a derived slice when trimming components in place, as done by the capacity(int) and discardReadBytes() methods.

Modifications:

Ensure that we keep a ref to the trimmed components' original retained slice in the capacity(int) and discardReadBytes() methods, so that it is released properly when they are later freed. Add a unit test which fails prior to the fix.

Result:

Edge case leak is eliminated.
2019-03-22 11:18:10 +01:00
Norman Maurer c83904a12a
Allow to automatically trim the PoolThreadCache in a timely interval (#8941)
Motivation:

PooledByteBufAllocator uses a PoolThreadCache per Thread that allocates / deallocates to minimize the performance overhead. This PoolThreadCache is trimmed after X allocations to free up buffers that have not been allocated from for a long time. This works out quite well when the app continues to allocate but fails if the app stops allocating frequently (for whatever reason), and so a lot of memory is wasted and not given back to the arena / freed.

Modifications:

- Add a ThreadExecutorMap that offers multiple methods that wrap Runnable / ThreadFactory / Executor and allow to call ThreadExecutorMap.currentEventExecutor() to get the current executing EventExecutor for the calling Thread.
- Use these methods in the constructors of our EventExecutor implementations (which also covers the EventLoop implementations)
- Add io.netty.allocator.cacheTrimIntervalMillis system property which can be used to specify a fixed rate / interval on which we should try to trim the PoolThreadCache for a EventExecutor that allocates.
- Add PooledByteBufAllocator.trimCurrentThreadCache() to allow the user to trim the cache of the calling thread manually.
- Add testcases
- Introduce FastThreadLocal.getIfExists()

Result:

Allow the PoolThreadCache to be trimmed better / more frequently and so give memory back to the arena / system.
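A sketch of the two knobs named above; the 30-second interval is arbitrary, and the system property is typically read when the allocator class initializes, so it must be set early:

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class TrimCacheExample {
    public static void main(String[] args) {
        // Optional: let Netty trim per-thread caches on a fixed interval.
        System.setProperty("io.netty.allocator.cacheTrimIntervalMillis", "30000");

        PooledByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;
        ByteBuf buf = allocator.directBuffer(256);
        buf.release();

        // Or trim the calling thread's cache manually, e.g. when the thread
        // knows it will not allocate again for a while.
        allocator.trimCurrentThreadCache();
    }
}
```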
2019-03-22 11:08:37 +01:00
Norman Maurer 35bc73f9b0
Update to new netty-build version to be able to correctly detect copyright header in property files. (#8967)
Motivation:

https://github.com/netty/netty/pull/8963 adds property files which contain a netty copyright header but our old checkstyle regex did not correctly detect these.

Modifications:

Update to new netty-build which contains an updated regex.

Result:

Be able to correctly detect copyright headers in property files.
2019-03-22 10:54:11 +01:00
Nick Hill daf63373bf AbstractChannelHandlerContext doesn't need to extend DefaultAttributeMap (#8960)
Motivation:

It appears this was an oversight, maybe was valid at some point in the past. Noticed while reviewing #8958.

Modifications:

Change AbstractChannelHandlerContext to not extend DefaultAttributeMap.

Result:

Simpler hierarchy, eliminate unused attributes field from each context instance.
2019-03-21 08:49:26 +01:00
Norman Maurer 9b1a59df38
Remove old internal code that is not used anymore after removing usage of ObjectCleaner (#8956)
Motivation:

We don't use ObjectCleaner in our FastThreadLocal anymore so we also don't need to take special care to store it there anymore.

Modifications:

Remove code that is not needed anymore.

Result:

Code cleanup.
2019-03-20 08:33:06 +01:00
Norman Maurer cb231e9796 Remove test.log file that was commited by mistake.
Motivation:

We committed a test.log file by mistake.

Modifications:

Remove the file.

Result:

Cleanup repo.
2019-03-19 17:56:59 +01:00
Norman Maurer 32bca66794 Add .gitignore for docker-sync stuff
Motivation:

df8b9d3fb9 added config files for docker-sync but missed adding a gitignore entry for .docker-sync

Modifications:

Add .docker-sync to gitignore

Result:

Ignore .docker-sync directory
2019-03-19 14:03:15 +01:00
Norman Maurer c7248d84b5
Let GlobalEventExecutor implement OrderedEventExecutor (#8952)
Motivation:

GlobalEventExecutor already provides all guarantees of OrderedEventExecutor so it should implement it.

Modifications:

Let GlobalEventExecutor implement OrderedEventExecutor.

Result:

Make it more clear how execution order is handled in GlobalEventExecutor.
2019-03-19 11:39:20 +01:00
Lunfu Zhong e7b3195570 Support ALLOW_HALF_CLOSURE channel option on Unix domain socket. (#8932)
Motivation:

Since DomainSocketChannel is a DuplexChannel, it is able to shut down input or output individually on demand, but the ALLOW_HALF_CLOSURE channel option has not been supported yet.

I thought this could be a missing feature of Unix domain sockets, so here is the PR for it.

Modifications:

1. Added the allowHalfClosure property both in EpollDomainSocketChannelConfig and KQueueDomainSocketChannelConfig,
2. Enabled isAllowHalfClosure method of native channel to support domain channel config,
3. Created EpollDomainSocketShutdownOutputByPeerTest and KQueueDomainSocketShutdownOutputByPeerTest to verify the change.

Result:

The ALLOW_HALF_CLOSURE channel option can be set on a DomainSocketChannel, and there is no more warning about the unknown channel option 'ALLOW_HALF_CLOSURE'.
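A minimal sketch of enabling the option on an epoll domain socket server; this requires Linux with the epoll transport, and the socket path is hypothetical:

```
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerDomainSocketChannel;
import io.netty.channel.unix.DomainSocketAddress;
import io.netty.channel.unix.DomainSocketChannel;

public class HalfClosureDomainSocketServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new EpollEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(group)
                    .channel(EpollServerDomainSocketChannel.class)
                    // Accepted domain socket channels may now half-close cleanly.
                    .childOption(ChannelOption.ALLOW_HALF_CLOSURE, true)
                    .childHandler(new ChannelInitializer<DomainSocketChannel>() {
                        @Override
                        protected void initChannel(DomainSocketChannel ch) {
                            // add handlers here
                        }
                    });
            b.bind(new DomainSocketAddress("/tmp/echo.sock")).sync()
             .channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```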
2019-03-19 11:24:07 +01:00
Norman Maurer df8b9d3fb9
Add docker-sync config to step up docker-usage on macOS. (#8948)
Motivation:

docker-sync.io helps to speed up docker FS access on macOS and so makes builds there a lot faster. We should add some config to help users use it.

Modifications:

Add docker-sync configs for centos-6.18 which is what we use for releases.

Result:

Faster builds via docker are possible when using macOS.
2019-03-19 08:35:49 +01:00
Enrico Olivelli eb1d12c757 Expose the global direct memory counter. (#8945)
Motivation:
This counter is very useful in order to monitor Netty without having to inspect every ByteBufAllocator in the JVM

Modification:
Expose the value of DIRECT_MEMORY_COUNTER as we are already doing for DIRECT_MEMORY_LIMIT.
We return -1 in case DIRECT_MEMORY_COUNTER is not available.

Result:

Be able to get the amount of direct memory used.
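A hedged sketch of reading the counter, assuming it is surfaced via PlatformDependent.usedDirectMemory() alongside the existing maxDirectMemory() accessor:

```
import io.netty.util.internal.PlatformDependent;

public class DirectMemoryStats {
    public static void main(String[] args) {
        // Assumed accessor name; the counter returns -1 when tracking is unavailable.
        long used = PlatformDependent.usedDirectMemory();
        long max = PlatformDependent.maxDirectMemory();
        System.out.println("direct memory used=" + used + " max=" + max);
    }
}
```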
2019-03-19 08:34:35 +01:00
Norman Maurer e9ce5048df
Correctly produce ssl alert when certificate validation fails on the client-side when using native SSL implementation. (#8949)
Motivation:

When the verification of the server cert fails because of the used TrustManager on the client-side we need to ensure we produce the correct alert and send it to the remote peer before closing the connection.

Modifications:

- Use the correct verification mode on the client-side by default.
- Update tests

Result:

Fixes https://github.com/netty/netty/issues/8942.
2019-03-18 18:42:11 +01:00
Norman Maurer d0fb41e529
Adjust testsuite-osgi to resolve bundles from local build (#8944)
Motivation:

testsuite-osgi currently resolves its bundles from the local / remote maven repository, which means you need to run `mvn install` before it can pick up the bundles. Besides this, it also means that you may pick up old versions if you forgot to call `install` before running it.

Modifications:

Use alta-maven-plugin to be able to resolve bundles from the local build directory during the build.

Result:

No need to install jars before running the OSGI testsuite and ensure we always test with the latest jars.
2019-03-18 09:27:43 +01:00
Norman Maurer eab849176b
Fix typo in NativeLibraryLoader debug log message (#8947)
Motivation:

We had a typo in the NativeLibraryLoader debug log message which could mislead the user.

Modifications:

Fix typo to correctly state java.library.path

Result:

Correct and less confusing log message
2019-03-16 14:27:48 +01:00
violetagg c8daea3045 Fix HttpUtil.isKeepAlive to behave correctly when Connection is a comma separated list (#8924)
Motivation:

According to the specification, the "Connection" header's syntax is:

"
The Connection header field's value has the following grammar:

     Connection        = 1#connection-option
     connection-option = token

Connection options are case-insensitive.
"
https://tools.ietf.org/html/rfc7230#section-6.1

This means that Connection's value can have at least one element or
a comma separated list with elements
When calculating whether the connection can remain open,
HttpUtil.isKeepAlive(HttpMessage) should take this into account.

Modifications:

- Check for "close" and "keep-alive" in a comma separated list
- Add unit test

Result:

HttpUtil.isKeepAlive(HttpMessage) works correctly when "Connection: Upgrade, close"
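A small sketch demonstrating the fixed behaviour, using the header value from the example above:

```
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public class KeepAliveCheckExample {
    public static void main(String[] args) {
        FullHttpRequest request =
                new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
        // "close" hides inside a comma separated list of connection options.
        request.headers().set(HttpHeaderNames.CONNECTION, "Upgrade, close");

        // With the fix this prints false, because "close" is honoured even when
        // it is not the only connection-option in the header value.
        System.out.println(HttpUtil.isKeepAlive(request));
        request.release();
    }
}
```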
2019-03-13 14:28:28 +01:00
Norman Maurer c20c754d78
Fail build when Illegal reflective access is detected (#8933)
Motivation:

We want to make the experience as smooth as possible for our users when using Java 9+ and so should ensure we do not produce any 'Illegal reflective access' errors when using netty.

Modifications:

Add jvmArgs when running our tests that deny reflective access and so fail the build at the end due to not being able to load some classes.

Result:

Ensure we do not produce any illegal reflective access errors when using Java 9+
2019-03-13 09:47:02 +01:00
Norman Maurer 5eb91d9ca1
Remove --add-opens=java.base/java.nio=ALL-UNNAMED when running tests as it is not needed anymore since a long time (#8934)
Motivation:

At some point we needed --add-opens=java.base/java.nio=ALL-UNNAMED to run our native tests but this is not true anymore.

Modifications:

Remove --add-opens=java.base/java.nio=ALL-UNNAMED when running native tests.

Result:

Remove the obsolete JVM arg.
2019-03-13 08:25:10 +01:00
Norman Maurer 0ee067082b
Add unit test for query TXT records. (#8923)
Motivation:

We did not have any unit tests that query for TXT records.

Modifications:

Add unit test to query TXT records.

Result:

More test-coverage.
2019-03-09 21:41:28 +01:00
root 92b19cfedd [maven-release-plugin] prepare for next development iteration 2019-03-08 08:55:45 +00:00
root ff7a9fa091 [maven-release-plugin] prepare release netty-4.1.34.Final 2019-03-08 08:51:34 +00:00
Nick Hill b2eaab092b Optimize Hpack and AsciiString hashcode and equals (#8902)
Motivation:

While looking at hpack header-processing hotspots I noticed some low
level too-big-to-inline methods which can be shrunk.

Modifications:

Reduce bytecode size and/or runtime operations used for the following
methods:

PlatformDependent0.equals(byte[], ...)
PlatformDependent0.equalsConstantTime(byte[], ...)
PlatformDependent0.hashCodeAscii(byte[],int,int)
PlatformDependent.hashCodeAscii(CharSequence)

Result:

Existing benchmarks show decent improvement

Before

Benchmark                     (size)   Mode  Cnt         Score         Error  Units
HpackUtilBenchmark.newEquals   SMALL  thrpt    5  17200229.374 ± 1701239.198  ops/s
HpackUtilBenchmark.newEquals  MEDIUM  thrpt    5   3386061.629 ±   72264.685  ops/s
HpackUtilBenchmark.newEquals   LARGE  thrpt    5    507579.209 ±   65883.951  ops/s

After

Benchmark                     (size)   Mode  Cnt         Score         Error  Units
HpackUtilBenchmark.newEquals   SMALL  thrpt    5  29221527.058 ± 4805825.836  ops/s
HpackUtilBenchmark.newEquals  MEDIUM  thrpt    5   6556251.645 ±  466115.199  ops/s
HpackUtilBenchmark.newEquals   LARGE  thrpt    5    879828.889 ±  148136.641  ops/s

Before

Benchmark                          (size)  Mode  Cnt     Score     Error  Units
PlatformDepBench.unsafeBytesEqual       4  avgt   10     4.263 ±   0.110  ns/op
PlatformDepBench.unsafeBytesEqual      10  avgt   10     5.206 ±   0.133  ns/op
PlatformDepBench.unsafeBytesEqual      50  avgt   10     8.160 ±   0.320  ns/op
PlatformDepBench.unsafeBytesEqual     100  avgt   10    13.810 ±   0.751  ns/op
PlatformDepBench.unsafeBytesEqual    1000  avgt   10    89.077 ±   7.275  ns/op
PlatformDepBench.unsafeBytesEqual   10000  avgt   10   773.940 ±  24.579  ns/op
PlatformDepBench.unsafeBytesEqual  100000  avgt   10  7546.807 ± 110.395  ns/op

After

Benchmark                          (size)  Mode  Cnt     Score     Error  Units
PlatformDepBench.unsafeBytesEqual       4  avgt   10     3.337 ±   0.087  ns/op
PlatformDepBench.unsafeBytesEqual      10  avgt   10     4.286 ±   0.194  ns/op
PlatformDepBench.unsafeBytesEqual      50  avgt   10     7.817 ±   0.123  ns/op
PlatformDepBench.unsafeBytesEqual     100  avgt   10    11.260 ±   0.412  ns/op
PlatformDepBench.unsafeBytesEqual    1000  avgt   10    84.255 ±   2.596  ns/op
PlatformDepBench.unsafeBytesEqual   10000  avgt   10   591.892 ±   5.136  ns/op
PlatformDepBench.unsafeBytesEqual  100000  avgt   10  6978.859 ± 285.043  ns/op
2019-03-08 06:55:11 +01:00
Norman Maurer 3e24e9f6ff
ReferenceCountedOpenSslEngines SSLSession must provide local certific… (#8918)
Motivation:

The SSLSession that is returned by SSLEngine.getHandshakeSession() must be able to provide the local certificates when the TrustManager is invoked on the server-side.

Modifications:

- Correctly return the local certificates
- Add unit test

Result:

Be able to obtain local certificates from handshake SSLSession during verification on the server side.
2019-03-08 06:47:28 +01:00
Norman Maurer 67663fa7d1
HttpContentDecoder must continue read when it did not produce any mes… (#8922)
Motivation:

When HttpContentDecoder (and so HttpContentDecompressor) does not produce any message we need to make sure it calls ctx.read() if auto read is false, in order to not stall.

Modifications:

- Keep track if we need to call ctx.read() or not
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/8915.
2019-03-07 10:31:51 +01:00
Norman Maurer 1725504a37
Do not use GetPrimitiveArrayCritical(...) due multiple not-fixed bugs… (#8921)
* Do not use GetPrimitiveArrayCritical(...) due to multiple unfixed bugs related to GCLocker

Motivation:

GetPrimitiveArrayCritical(...) may trigger multiple unfixed bugs related to the GCLocker while there is little gain for our use-case. We should just use GetByteArrayRegion(...) and copy into a small on-stack buffer.

See also:

- https://shipilev.net/jvm/anatomy-quarks/9-jni-critical-gclocker/#_g1
- https://bugs.openjdk.java.net/browse/JDK-8048556
- https://bugs.openjdk.java.net/browse/JDK-8057573
- https://bugs.openjdk.java.net/browse/JDK-8057586

Special thanks to @jayv @shipilev @apangin for the pointers.

Modifications:

Replace GetPrimitiveArrayCritical(...) with GetByteArrayRegion(...)

Result:

Less risk of hitting GCLocker related bugs.
2019-03-07 10:30:55 +01:00
Norman Maurer 0de5402337
Add interop tests between Conscrypt and OpenSSL SSLEngine implementations. (#8919)
Motivation:

In the past we found a lot of SSL related bugs because of the interop tests we have in place between different SSLEngine implementations. We should have as many of these interop tests as possible for this reason.

Modifications:

- Add interop tests between Conscrypt and OpenSSL SSLEngine implementations

Result:

More tests for SSL.
2019-03-07 09:36:59 +01:00
Oleksii Kachaiev a651804f9d Carefully manage Keep-Alive connections in HttpStaticFileServer (#8914)
Motivation:

Simple rules:

* close the connection when sending any error
* specify "Connection: close" header when closing the connection
* successful responses should keep the connection intact unless otherwise requested by the client

Modifications:

* "send response and cleanup the connection" logic moved to a helper
* for all successful responses set the "Content-Length" header
* do not specify the "Connection: Keep-Alive" header as it's the default for HTTP/1.1
* set "Connection: close" header when necessary

Result:

Keep-Alive connection management is in line with the RFCs.
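A simplified sketch of the "send response and clean up the connection" helper pattern described above, with hypothetical names; it is not the exact example code:

```
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpUtil;

public final class ResponseSender {
    private ResponseSender() { }

    static void sendAndCleanup(ChannelHandlerContext ctx, FullHttpRequest request,
                               FullHttpResponse response) {
        boolean keepAlive = HttpUtil.isKeepAlive(request)
                && response.status().code() < 400; // close on any error response

        HttpUtil.setContentLength(response, response.content().readableBytes());
        if (!keepAlive) {
            // Tell the client we are going to close the connection.
            response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
        }

        ChannelFuture flushFuture = ctx.writeAndFlush(response);
        if (!keepAlive) {
            flushFuture.addListener(ChannelFutureListener.CLOSE);
        }
    }
}
```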
2019-03-06 15:51:58 +01:00
Norman Maurer 39fcdb3e0d
Support delegating task when using ReferenceCountedOpenSslEngine. (#8859)
Motivation:

The SSLEngine API has a notion of tasks that may be expensive and can be offloaded to another thread. We did not support this when using our native implementation, but now we can for various operations during the handshake.

Modifications:

- Support offloading tasks during the handshake when using our native SSLEngine implementation
- Correctly handle the case when NEED_TASK is returned and nothing was consumed / produced yet

Result:

Be able to offload long running tasks from the EventLoop when using SslHandler with our native SSLEngine.
2019-03-05 09:17:18 +01:00
Norman Maurer 452abd9b51
Correctly monkey-patch id also when os / arch is used within library name. (#8913)
Motivation:

2bb9f64e16 introduced a change which made it possible to use different shaded versions of netty-tcnative on the classpath. This only partly worked as we did not correctly handle the case when os / arch is part of the library name (which is the case when netty-tcnative-boringssl-static is used with the uber jar).

Modifications:

- If patching the ID failed we retry again with the os / arch stripped
- Add unit tests to verify that patching ID now works with and without os / arch as suffix.

Result:

Using multiple shaded versions of netty-tcnative-boringssl-static on macOS works.
2019-03-05 09:10:26 +01:00
Norman Maurer 14ef469f31
Use maven plugin to prevent API/ABI breakage as part of build process (#8904)
Motivation:

Netty is very widely used, which can lead to a lot of pain when we break API / ABI. We should make use of the japicmp-maven-plugin during the build to verify we do not introduce breakage by mistake.

Modifications:

- Add japicmp-maven-plugin to the build process
- Fix a method signature change in HttpProxyHandler that was flagged as a possible problem.

Result:

Ensure no API/ABI breakage occurs between releases.
2019-03-01 19:42:29 +01:00
Norman Maurer 6f507dfeed
Only remove ReferenceCountedOpenSslEngine from OpenSslEngineMap when engine is destroyed (#8905)
Motivation:

We must only remove the ReferenceCountedOpenSslEngine from the OpenSslEngineMap when the engine is destroyed, as the verifier / certificate callback may be called multiple times when the remote peer initiates a renegotiation.
If we fail to do so we will cause an NPE like this:

```
13:16:36.750 [testsuite-oio-worker-5-18] DEBUG i.n.h.s.ReferenceCountedOpenSslServerContext - Failed to set the server-side key material
java.lang.NullPointerException: null
	at io.netty.handler.ssl.OpenSslKeyMaterialManager.setKeyMaterialServerSide(OpenSslKeyMaterialManager.java:69)
	at io.netty.handler.ssl.ReferenceCountedOpenSslServerContext$OpenSslServerCertificateCallback.handle(ReferenceCountedOpenSslServerContext.java:212)
	at io.netty.internal.tcnative.SSL.readFromSSL(Native Method)
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.readPlaintextData(ReferenceCountedOpenSslEngine.java:575)
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1124)
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1236)
	at io.netty.handler.ssl.ReferenceCountedOpenSslEngine.unwrap(ReferenceCountedOpenSslEngine.java:1279)
	at io.netty.handler.ssl.SslHandler$SslEngineType$1.unwrap(SslHandler.java:217)
	at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1330)
	at io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1237)
	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1274)
	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	at io.netty.channel.oio.AbstractOioByteChannel.doRead(AbstractOioByteChannel.java:170)
	at io.netty.channel.oio.AbstractOioChannel$1.run(AbstractOioChannel.java:40)
	at io.netty.channel.ThreadPerChannelEventLoop.run(ThreadPerChannelEventLoop.java:69)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:834)
```

While the exception is kind of harmless (as we will reject the renegotiation at the end anyway) it produces some noise in the logs.

Modifications:

Don't remove the engine from the map after the handshake is complete; instead wait until the engine is destroyed.

Result:

No more NPE and less noise in the logs.
2019-03-01 19:31:06 +01:00
Norman Maurer ef3e98d905
Add docker-compose config to run build with OpenJ9 JVM (#8903)
Motivation:

To ensure Netty works on different JVMs we should also run tests on the CI with these.

Modifications:

Add docker-compose config to run build with OpenJ9 JVM

Result:

Ensure Netty works with different JVMs
2019-03-01 11:28:51 +01:00
Norman Maurer 90ea3ec9f6
Adjust tests to be able to build / test when using IBM J9 / OpenJ9 (#8900)
Motivation:

We should run a CI job using J9 to ensure netty also works when using different JVMs.

Modifications:

- Adjust PooledByteBufAllocatorTest to be able to complete faster when using a JVM which takes longer when joining Threads (this seems to be the case with J9).
- Skip UDT tests on J9 as UDT is not supported there.

Result:

Be able to run CI against J9.
2019-03-01 06:47:56 +01:00
Konstantin Lutovich e609b5eeb7 Close consumed inputs in ChunkedWriteHandler (#8876)
Motivation:

ChunkedWriteHandler needs to close both successful and failed
ChunkedInputs. It used to never close successful ones.

Modifications:

* ChunkedWriteHandler always closes the ChunkedInput before completing
the write promise.
* Ensure only ChunkedInput#close() is invoked
on a failed input.
* Ensure no methods are invoked on a closed input.

Result:

Fixes https://github.com/netty/netty/issues/8875.
2019-02-28 21:13:56 +01:00
Nick Hill 0811409ca3 Further reduce ensureAccessible() overhead (#8895)
Motivation:

This PR fixes some non-negligible overhead discovered in the ByteBuf
accessibility (non-zero refcount) checking. The cause turned out to be
mostly twofold:
- Unnecessary operations used to calculate the refcount from the "raw"
encoded int field value
- Call stack depths exceeding the default limit for inlining, in some
places (CompositeByteBuf in particular)

It's a follow-on from #8882 which uses the maxCapacity field for a
simpler non-negative check. The performance gap between these two
variants appears to be _mostly_ closed, but there's one exception which
may warrant further analysis.

Modifications:

- Replace ABB.internalRefCount() with ByteBuf.isAccessible(), the
default still checks for non-zero refCnt()
- Just test for parity of raw refCnt instead of converting to "real",
with fast-path for specific small values
- Make sure isAccessible() is delegated by derived/wrapper ByteBufs
- Use existing freed flag in CompositeByteBuf for faster isAccessible()
- Manually inline some calls in methods like CompositeByteBuf.setLong()
and AbstractReferenceCountedByteBuf.isAccessible() to reduce stack
depths (to ensure default inlining limit isn't hit)
- Add ByteBufAccessBenchmark which is an extension of
UnsafeByteBufBenchmark (maybe latter could now be removed)

Results:

Before:

Benchmark   (bufferType)  (checkAccessible)  (checkBounds)   Mode  Cnt          Score          Error  Units
readBatch         UNSAFE               true           true  thrpt   30   84524972.863 ±   518338.811  ops/s
readBatch   UNSAFE_SLICE               true           true  thrpt   30   38608795.037 ±   298176.974  ops/s
readBatch           HEAP               true           true  thrpt   30   80003697.649 ±   974674.119  ops/s
readBatch      COMPOSITE               true           true  thrpt   30   18495554.788 ±   108075.023  ops/s
setGetLong        UNSAFE               true           true  thrpt   30  247069881.578 ± 10839162.593  ops/s
setGetLong  UNSAFE_SLICE               true           true  thrpt   30  196355905.206 ±  1802420.990  ops/s
setGetLong          HEAP               true           true  thrpt   30  245686644.713 ± 11769311.527  ops/s
setGetLong     COMPOSITE               true           true  thrpt   30   83170940.687 ±   657524.123  ops/s
setLong           UNSAFE               true           true  thrpt   30  278940253.918 ±  1807265.259  ops/s
setLong     UNSAFE_SLICE               true           true  thrpt   30  202556738.764 ± 11887973.563  ops/s
setLong             HEAP               true           true  thrpt   30  280045958.053 ±  2719583.400  ops/s
setLong        COMPOSITE               true           true  thrpt   30  121299806.002 ±  2155084.707  ops/s


After:

Benchmark   (bufferType)  (checkAccessible)  (checkBounds)   Mode  Cnt          Score          Error  Units
readBatch         UNSAFE               true           true  thrpt   30  101641801.035 ±  3950050.059  ops/s
readBatch   UNSAFE_SLICE               true           true  thrpt   30   84395902.846 ±  4339579.057  ops/s
readBatch           HEAP               true           true  thrpt   30  100179060.207 ±  3222487.287  ops/s
readBatch      COMPOSITE               true           true  thrpt   30   42288494.472 ±   294919.633  ops/s
setGetLong        UNSAFE               true           true  thrpt   30  304530755.027 ±  6574163.899  ops/s
setGetLong  UNSAFE_SLICE               true           true  thrpt   30  212028547.645 ± 14277828.768  ops/s
setGetLong          HEAP               true           true  thrpt   30  309335422.609 ±  2272150.415  ops/s
setGetLong     COMPOSITE               true           true  thrpt   30  160383609.236 ±   966484.033  ops/s
setLong           UNSAFE               true           true  thrpt   30  298055969.747 ±  7437449.627  ops/s
setLong     UNSAFE_SLICE               true           true  thrpt   30  223784178.650 ±  9869750.095  ops/s
setLong             HEAP               true           true  thrpt   30  302543263.328 ±  8140104.706  ops/s
setLong        COMPOSITE               true           true  thrpt   30  157083673.285 ±  3528779.522  ops/s

There's also a similar knock-on improvement to other benchmarks (e.g.
HPACK encoding/decoding) as shown in #8882.

For sanity I did a final comparison of the "fast path" tweak using one
of the HPACK benchmarks:

(rawCnt & 1) == 0:

Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt      Score    Error  Units
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   30  50914.479 ± 940.114  ops/s

rawCnt == 2 || rawCnt == 4 || rawCnt == 6 || rawCnt == 8 || (rawCnt & 1) == 0:

Benchmark                     (limitToAscii)  (sensitive)  (size)   Mode  Cnt      Score      Error  Units
HpackDecoderBenchmark.decode            true         true  MEDIUM  thrpt   30  60036.425 ± 1478.196  ops/s
2019-02-28 20:40:41 +01:00
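A standalone sketch of the parity fast path described above (illustrative Java, not Netty's internal AbstractReferenceCountedByteBuf code): live buffers keep an even raw refCnt, so a buffer is accessible exactly when the low bit is clear, with an explicit fast path for the small even values hit most often.

```java
public final class RefCntParity {
    private RefCntParity() { }

    // Returns true if the raw (encoded) refCnt still denotes a live buffer.
    static boolean isLiveRawRefCnt(int rawCnt) {
        return rawCnt == 2 || rawCnt == 4 || rawCnt == 6 || rawCnt == 8 || (rawCnt & 1) == 0;
    }

    public static void main(String[] args) {
        System.out.println(isLiveRawRefCnt(2)); // true  (real refCnt 1)
        System.out.println(isLiveRawRefCnt(1)); // false (already released)
    }
}
```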
Norman Maurer 625c4e8286
Tighten up contract of PromiseCombiner and so make it more safe to use (#8886)
Motivation:

PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.

Modifications:

- Add new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor.
- Check if methods are called from within the EventExecutor thread and if not fail
- Correctly dispatch on the right EventExecutor if the Future uses a different EventExecutor to eliminate concurrency issues.

Result:

More safe use of PromiseCombiner + enforce correct usage / contract.
2019-02-28 20:32:04 +01:00
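A minimal usage sketch of the stricter contract described above, assuming the new PromiseCombiner(EventExecutor) constructor: all combiner calls are made from the given executor's thread, and the combiner dispatches back to that executor for futures that complete elsewhere.

```java
import io.netty.util.concurrent.DefaultEventExecutor;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.Promise;
import io.netty.util.concurrent.PromiseCombiner;

public final class PromiseCombinerExample {
    public static void main(String[] args) {
        EventExecutor executor = new DefaultEventExecutor();
        // All PromiseCombiner methods must be called from this executor's thread.
        executor.execute(() -> {
            Promise<Void> first = executor.newPromise();
            Promise<Void> second = executor.newPromise();
            Promise<Void> aggregate = executor.newPromise();

            PromiseCombiner combiner = new PromiseCombiner(executor); // non-deprecated constructor
            combiner.add(first);
            combiner.add(second);
            combiner.finish(aggregate);

            aggregate.addListener(f -> System.out.println("all done: " + f.isSuccess()));
            first.setSuccess(null);
            second.setSuccess(null);
        });
        executor.shutdownGracefully();
    }
}
```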
Norman Maurer c6d3792df0
Correctly resume wrap / unwrap when SslTask execution completes (#8899)
Motivation:

fa6a8cb09c introduced correct dispatching of delegated tasks for SSLEngine but did not correctly handle some cases for resuming wrap / unwrap after the task was executed. This could lead to stalls, which showed up during tests when running with Java11 and BoringSSL.

Modifications:

- Correctly resume wrap / unwrap in all cases.
- Fix timeout value which was changed in previous commit by mistake.

Result:

No more stalls after task execution.
2019-02-28 20:29:40 +01:00
Norman Maurer d3d0b6478b
Update JDK12 and 13 to latest EA releases. (#8809)
Motivation:

We use outdated EA releases when building and testing with JDK 12 and 13.

Modifications:

- Update versions.
- Add workaround for possible JDK12+ bug.

Result:

Use latest releases
2019-02-28 13:54:04 +01:00
Norman Maurer 215b61e8e2
Add test for Iterator.remove() on KObjectHashMap.values().iterator() (#8891)
Motivation:

https://github.com/netty/netty/pull/8866 added support for calling Iterator.remove() but did not add a testcase.

Modifications:

Add testcase to ensure removal works.

Result:

Better test-coverage.
2019-02-27 12:06:13 +01:00
Michael André Pearce e4d4775a10 Support removal using values iterator. (#8866)
Motivation:

As the ActiveMQ project uses netty, we want to make use of this class. Unfortunately the iterator returned by values() does not support the remove method, even though the delegated iterator does. Currently we have to clone and modify this class locally although only a one-line change is needed; it would be ideal if netty allowed remove, removing the need to maintain a clone.

Modifications:

* Remove the throw of UnsupportedOperationException and instead call the remove method on the delegated iterator

Result:

Be able to call Iterator.remove() for the values.
2019-02-26 21:02:56 +01:00
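A standalone sketch of the one-line idea behind this change: the values() iterator simply forwards remove() to the backing iterator instead of throwing. The wrapper class below is illustrative, not the actual KObjectHashMap code.

```java
import java.util.Iterator;

final class DelegatingValueIterator<V> implements Iterator<V> {
    private final Iterator<V> delegate; // iterator over the backing map's values

    DelegatingValueIterator(Iterator<V> delegate) {
        this.delegate = delegate;
    }

    @Override
    public boolean hasNext() {
        return delegate.hasNext();
    }

    @Override
    public V next() {
        return delegate.next();
    }

    @Override
    public void remove() {
        delegate.remove(); // previously: throw new UnsupportedOperationException();
    }
}
```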
Norman Maurer 81e43d5088
DefaultFileRegion.transferTo with invalid count may cause busy-spin (#8885)
Motivation:

`DefaultFileRegion.transferTo` will return 0 all the time when we request more data than the actual file size. This may result in a busy spin while processing the file region during writes.

Modifications:

- If we wrote 0 bytes, check if the underlying file size is smaller than the requested count and if so throw an IOException
- Add DefaultFileRegionTest
- Add a test to the testsuite

Result:

Fixes https://github.com/netty/netty/issues/8868.
2019-02-26 11:08:09 +01:00
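An illustrative sketch of the guard described above; transferChecked is a hypothetical helper, not DefaultFileRegion's actual API. If transferTo(...) writes nothing and the requested range extends past the real file size, fail with an IOException instead of spinning.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

final class FileRegionGuard {
    static long transferChecked(FileChannel file, WritableByteChannel target,
                                long position, long count) throws IOException {
        long written = file.transferTo(position, count, target);
        if (written == 0 && position + count > file.size()) {
            // Nothing was written because the requested range does not exist: fail loudly.
            throw new IOException("Requested range " + position + "+" + count
                    + " exceeds file size " + file.size());
        }
        return written;
    }
}
```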
Dmitriy Dumanskiy 5d448377e9 Avoid unnecessary char casts for CookieEncoder (#8827)
Motivation:

Avoid unnecessary (char) casts by changing variables types.

Modifications:

Use chars directly.

Result:

Less casts.
2019-02-25 19:50:19 +01:00
Norman Maurer d02b51965f
Don't deregister Channel as part of closing it when using native kqueue transport (#8881)
Motivation:

In https://github.com/netty/netty/pull/8665 we changed how we handle the registration of Channels with KQueue but missed removing some code which would deregister the Channel before it actually closed the underlying socket. This could lead to events still being triggered while no mapping to the Channel exists anymore.

Modifications:

Remove deregister call during socket closure.

Result:

Fixes https://github.com/netty/netty/issues/8849.
2019-02-25 08:55:55 +01:00
Norman Maurer f176384a72
Include the original Exception that caused the Channel to be closed in the ClosedChannelException (#8863)
Motivation:

To make it easier to understand why a Channel was closed previously and so why the operation failed with a ClosedChannelException we should include the original Exception.

Modifications:

- Store the original exception that lead to the closed Channel and include it in the ClosedChannelException that is used to fail the operation.
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/8862.
2019-02-15 13:13:17 -08:00
Norman Maurer 1c6191c166
Do not depend on the implementation detail of Unpooled.buffer(int) when accessing backing array. (#8865)
Motivation:

We should not depend on the implementation detail of Unpooled.buffer(int) to allocate the exact size of backing byte[] as depending on the implementation it may return a buffer with a bigger backing array.

Modifications:

Explicitly allocate the byte[] and wrap it in the ByteBuf. This way we are sure that ByteBuf.array() returns a byte[] which has the exact length and content we expect.

Result:

More correct and safe usage of ByteBuf.array()
2019-02-15 09:38:36 -08:00
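A small sketch of the safer pattern the commit describes: allocate the byte[] explicitly and wrap it, so ByteBuf.array() is guaranteed to be exactly the array you expect, regardless of how Unpooled.buffer(int) sizes its backing storage.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class ExactArrayBuffer {
    public static void main(String[] args) {
        byte[] backing = new byte[16];
        ByteBuf buf = Unpooled.wrappedBuffer(backing);

        // array() now returns the very array we allocated, never an over-sized one.
        assert buf.array() == backing && buf.array().length == 16;
        buf.release();
    }
}
```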
Eric Anderson 098705040d Log the shaded form of native workdir system property (#8867)
Motivation:

When users' /tmp is noexec, NativeLibraryLoader logs a message informing
them how to fix the problem by setting a system property. However, if
Netty has been shaded that message will tell them to set the un-shaded
system property name, which won't work.

Modifications:

Change the code to let shading tools rename the native.workdir property
name reference within user-visible log messages.

Notably, debug logs were _not_ changed, as there's many debug statements
including a variety of property names. Fixing them would be a much more
invasive change and have limited benefit.

Result:

The users will see the correctly-named system property to set if they
are using a noexec /tmp.
2019-02-14 15:18:37 -08:00
Artem Morozov 8fecbab2c5 Handle null "origin" header in "Old Hixie 75 handshake" as proper bad request. (#8864)
Motivation:

Gracefully respond to bad client requests.
We have a set of errors produced by Android 7.1.1/7.1.2 clients where both headers `HttpHeaderNames.SEC_WEBSOCKET_VERSION` and `HttpHeaderNames.ORIGIN` are not present. Absence of the first header leads to WebSocketServerHandshaker00 being applied as the handshaker. However, the null second header causes

```
java.lang.NullPointerException: value
 io.netty.util.internal.ObjectUtil.checkNotNull(ObjectUtil.java:33)
 io.netty.handler.codec.DefaultHeaders.addObject(DefaultHeaders.java:327)
 io.netty.handler.codec.http.DefaultHttpHeaders.add(DefaultHttpHeaders.java:123)
 io.netty.handler.codec.http.websocketx.WebSocketServerHandshaker00.newHandshakeResponse(WebSocketServerHandshaker00.java:162)
```
This causes the connection to close with an unclear reason.

Modification:

Added null-check, and in case of null an appropriate WebSocketHandshakeException is thrown.

Result:

In case of null `HttpHeaderNames.ORIGIN` header a WebSocketHandshakeException is caught by WebSocketServerProtocolHandler which sends a graceful `BAD_REQUEST`.
2019-02-13 17:14:58 -08:00
Rukshani Athapathu c68e85b749 Fix h2c upgrade failure when multiple connection headers are present in upgrade request (#8848)
Motivation:

When more than one connection header is present in the h2c upgrade request, the upgrade fails. This is to fix that.

Modification:
In HttpServerUpgradeHandler's upgrade() method, check whether any of the connection header values is "upgrade", not just the first header value, which might be something other than "upgrade".

Result:
Fixes #8846.

With this PR, now when multiple connection headers are sent with the upgrade request, upgrade will not fail.
2019-02-12 08:05:30 -08:00
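A sketch of the fix described above (not the exact HttpServerUpgradeHandler code): inspect every Connection header value, and every comma-separated token within each value, rather than only the first one.

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.util.AsciiString;

import java.util.List;

final class UpgradeCheck {
    static boolean containsUpgrade(HttpRequest request) {
        List<String> connectionValues = request.headers().getAll(HttpHeaderNames.CONNECTION);
        for (String value : connectionValues) {
            // A single value may itself be a comma-separated list, e.g. "Upgrade, HTTP2-Settings".
            for (String token : value.split(",")) {
                if (AsciiString.contentEqualsIgnoreCase(token.trim(), HttpHeaderValues.UPGRADE)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```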
Norman Maurer fa6a8cb09c
Support using an Executor to offload blocking / long-running tasks wh… (#8847)
Motivation:

The SSLEngine does provide a way to signal to the caller that it may need to execute a blocking / long-running task which then can be offloaded to an Executor to ensure the I/O thread is not blocked. Currently how we handle this in SslHandler is not really optimal as while we offload to the Executor we still block the I/O Thread.

Modifications:

- Correctly support offloading the task to the Executor while suspending processing of SSL in the I/O Thread
- Add new methods to SslContext to specify the Executor when creating a SslHandler
- Remove @deprecated annotations from SslHandler constructor that takes an Executor
- Adjust tests to also run with the Executor to ensure all works as expected.

Result:

Be able to offload long running tasks to an Executor when using SslHandler. Partly fixes https://github.com/netty/netty/issues/7862 and https://github.com/netty/netty/issues/7020.
2019-02-11 09:47:44 +01:00
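A hedged usage sketch of the feature described above: pass an Executor when creating the SslHandler so long-running SSLEngine tasks run off the I/O thread. The exact newHandler(ByteBufAllocator, Executor) overload is assumed from the commit's description of the new SslContext methods.

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslHandler;
import io.netty.handler.ssl.util.InsecureTrustManagerFactory;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class OffloadedSslHandlerExample {
    public static void main(String[] args) throws Exception {
        ExecutorService sslTaskExecutor = Executors.newCachedThreadPool();
        SslContext sslCtx = SslContextBuilder.forClient()
                .trustManager(InsecureTrustManagerFactory.INSTANCE) // demo only, do not use in production
                .build();
        // Delegated SSLEngine tasks will be run on sslTaskExecutor instead of the I/O thread.
        SslHandler handler = sslCtx.newHandler(ByteBufAllocator.DEFAULT, sslTaskExecutor);
        // ... add `handler` to a pipeline as usual ...
        sslTaskExecutor.shutdown();
    }
}
```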
Norman Maurer c6a90d90a6
Add more tests to KQueue and Epoll testsuites. (#8851)
Motivation:

We missed extending a few tests from the testsuite and so did not run these with our native KQueue and Epoll transports.

Modifications:

Extend tests and so run these for our native transports as well.

Result:

More tests.
2019-02-08 20:08:34 +01:00
Norman Maurer 7375193141
Don't update state of PromiseCombiner when finish(null) is called (#8843)
Motivation:

When we fail a call to PromiseCombiner.finish(...) because of a null argument we must not update the internal state before throwing.

Modifications:

- First do the null check and only update the internal state after we have validated that the argument is not null
- Add test case.

Result:

The internal state of PromiseCombiner is not messed up when finish(...) is called with a null argument.
2019-02-04 19:07:42 +01:00
田欧 4c64c98f34 use checkPositive/checkPositiveOrZero (#8835)
Motivation:

We can replace some "hand-rolled" integer checks with our own static utility method to simplify the code.

Modifications:

Use methods provided by `ObjectUtil`.

Result:

Cleaner code and less duplication
2019-02-04 16:01:49 +01:00
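A minimal example of the replacement described above, using the internal ObjectUtil utility: the hand-rolled if/throw becomes a single checkPositive(...) call that also returns the validated value.

```java
import io.netty.util.internal.ObjectUtil;

public final class CheckPositiveExample {
    private final int maxFrameSize;

    CheckPositiveExample(int maxFrameSize) {
        // Instead of: if (maxFrameSize <= 0) { throw new IllegalArgumentException(...); }
        this.maxFrameSize = ObjectUtil.checkPositive(maxFrameSize, "maxFrameSize");
    }

    public static void main(String[] args) {
        new CheckPositiveExample(1024); // ok
        try {
            new CheckPositiveExample(0); // rejected
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```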
Dmitriy Dumanskiy b72fea340b Improve DateFormatter parsing performance (#8821)
Motivation:

I was just looking through the code and found one interesting place, DateFormatter.tryParseMonth, that was not very efficient, so I decided to optimize it a bit.

Modification:

Changed the DateFormatter.tryParseMonth method. Instead of invoking regionMatches() for every month, compare chars one by one.

Result:

DateFormatter.parseHttpDate method performance improved by ~3% to ~15%.

Benchmark                                                                (DATE_STRING)   Mode  Cnt        Score       Error  Units
DateFormatter2Benchmark.parseHttpHeaderDateFormatter     Sun, 27 Jan 2016 19:18:46 GMT  thrpt    6  4142781.221 ± 82155.002  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatter     Sun, 27 Dec 2016 19:18:46 GMT  thrpt    6  3781810.558 ± 38679.061  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew  Sun, 27 Jan 2016 19:18:46 GMT  thrpt    6  4372569.705 ± 30257.537  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew  Sun, 27 Dec 2016 19:18:46 GMT  thrpt    6  4339785.100 ± 57542.660  ops/s
2019-02-04 10:04:20 +01:00
Roger Kapsi 32563bfcc1 Selective Message Aggregation (#8793)
Motivation

Implementations of MessageAggregator (HttpObjectAggregator in particular) may wish to
selectively aggregate requests and responses on a case-by-case basis, such as for example
only POST requests or only responses of a certain content-type.

Modifications

Adding a flag to MessageAggregator that toggles between true/false depending on whether
aggregation is desired for the current message or not.

Result

Fixes #8772
2019-02-04 09:57:54 +01:00
Carl Mastrangelo 95bc819513 http-proxy: attach headers to connection exception (#8824)
Motivation:
When a proxy fails to connect, it includes useful error detail in
the headers.

Modification:
- Add an HTTP Specific ProxyConnectException
- Attach headers (if any) in the event of a non-200 response

Result:
Able to surface more useful error info to applications
2019-02-02 07:16:36 +01:00
Norman Maurer 7f61055cbd
Reduce direct memory overhead per EpollEventLoop when using EpollDatagramChannel (#8825)
Motivation:

When using a Linux distribution that supports sendmmsg(...) we allocated enough direct memory per EpollEventLoop to be able to write IOV_MAX iovecs per message that can be written per sendmmsg.
The number of messages that can be written per sendmmsg(...) call is limited by UIO_MAX_IOV.

In practice this resulted in an allocation of 16MB of direct memory per EpollEventLoop instance that stayed allocated until the EpollEventLoop was shut down, which happens as part of the shutdown of the enclosing EpollEventLoopGroup.

This resulted in quite heavy direct memory usage in practice, even though we have very slim chances of ever needing all of that memory.

Modification:

Adjust NativeDatagramPacketArray to share one IovArray instance across all NativeDatagramPacket instances it holds. This limits the max number of iovecs we can write across all messages to IOV_MAX per sendmmsg(...) call.
This in practice will still be enough to allow us to write multiple messages with one syscall while keep the memory overhead to a minimum.

Result:

Smaller direct memory footprint per EpollEventLoop when using EpollDatagramChannel on distributions that support sendmmsg(...).
Fixes https://github.com/netty/netty/issues/8814
2019-02-02 07:10:02 +01:00
Nick Hill 154d6e87f6 Fix varargs parameter logging in LocationAwareSlf4JLogger (#8834)
Motivation

As pointed out by @91he in
https://github.com/netty/netty/pull/8595#issuecomment-459181794, there
is a remaining bug in LocationAwareSlf4JLogger following the updates
done in #8595. The logging methods which take a varargs message
parameter array should format using MessageFormatter.arrayFormat rather
than MessageFormatter.format.

Modifications

Change varargs param methods in LocationAwareSlf4JLogger to use
MessageFormatter.arrayFormat and extend unit test to cover these cases.

Results

Correct log output when logging messages with > 2 parameters when using
LocationAwareSlf4JLogger.
2019-02-02 07:03:03 +01:00
Nick Hill 98aa5fbd66 CompositeByteBuf tidy-up (#8784)
Motivation

There's some miscellaneous cleanup/simplification of CompositeByteBuf
which would help make the code a bit clearer.

Modifications

- Simplify web of constructors and addComponents methods, reducing
duplication of logic
- Rename `Component.freeIfNecessary()` method to just `free()`, which is
less confusing (see #8641)
- Make loop in addComponents0(...) method more verbose/readable (see
https://github.com/netty/netty/pull/8437#discussion_r232124414)
- Simplify addition/subtraction in setBytes(...) methods

Result

Smaller/clearer code
2019-02-01 10:31:53 +01:00
Norman Maurer 7bba4f49cf
Reduce GC produced by native DatagramChannel implementations when in connected mode. (#8806)
Motivation:

In the native code EpollDatagramChannel / KQueueDatagramChannel creates a DatagramSocketAddress object for each received UDP datagram even when in connected mode as it uses the recvfrom(...) / recvmsg(...)  method. Creating these is quite heavy in terms of allocations as internally, char[], String, Inet4Address, InetAddressHolder, InetSocketAddressHolder, InetAddress[], byte[] objects are getting generated when constructing the object. When in connected mode we can just use regular read(...) calls which do not need to allocate all of these.

Modifications:

- When in connected mode use read(...) and NOT recvfrom(...) / recvmsg(...) to reduce allocations when possible.
- Adjust tests to ensure read works as expected when in connected mode.

Result:

Less allocations and GC when using native datagram channels in connected mode. Fixes https://github.com/netty/netty/issues/8770.
2019-02-01 10:29:36 +01:00
Norman Maurer ad922fa47e
Mark ChannelHandlerAdapter.exceptionCaught(...) as @deprecated. (#8826)
Motivation:

41e03adf24 marked ChannelHandler.exceptionCaught(...) as @deprecated but missed marking ChannelHandlerAdapter.exceptionCaught(...) as @deprecated as well. We should do so as most people extend the base classes rather than implement the interfaces directly.

Modifications:

Mark ChannelHandlerAdapter.exceptionCaught(...) as @deprecated as well.

Result:

Mark method as @deprecated to warn users about its removal.
2019-02-01 10:23:54 +01:00
Norman Maurer 91d3920aa2
HttpObjectDecoder ignores HTTP trailer header when empty line is rece… (#8799)
* HttpObjectDecoder ignores HTTP trailer header when empty line is received in separate ByteBuf

Motivation:

When the empty line that terminates the trailers was sent in a separate ByteBuf we ignored the previously parsed trailers and just returned none.

Modifications:

- Correctly respect previously parsed trailers.
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8736
2019-01-31 20:27:47 +01:00
田欧 a33200ca38 use checkPositive/checkPositiveOrZero (#8803)
Motivation:

We have utility methods to check for > 0 and >= 0 arguments. We should use them.

Modification:

use checkPositive/checkPositiveOrZero instead of if statement.

Result:

Re-use utility method.
2019-01-31 09:07:14 +01:00
Norman Maurer fe4a59011a
Do not schedule notify task if there are no listeners attached to the promise. (#8797)
Motivation:

If there are no listeners attached to the promise when fulfilling it we do not need to schedule a task to notify.

Modifications:

- Don't schedule a task if there is nothing to notify.
- Add unit tests.

Result:

Fixes https://github.com/netty/netty/issues/8795.
2019-01-31 08:56:01 +01:00
Dmitriy Dumanskiy ff7484864b Compare HttpMethod by reference (#8815)
Motivation:

In most cases, the HttpMethod instance is created via the factory method and the same instance is returned for known HTTP methods, so we can implement a fast path for equals().

Modification:

Replace == checks with HttpMethod.equals();
Use this == o within HttpMethod.equals();
Replace known new HttpMethod instances with HttpMethod.valueOf();

Result:

Comparisons should be a bit faster in some cases.
2019-01-30 21:17:00 +01:00
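A standalone sketch of the "reference first" equals() fast path described above; SimpleMethod is a stand-in type, not Netty's HttpMethod.

```java
final class SimpleMethod {
    private final String name;

    SimpleMethod(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) {
            // Fast path: known/interned instances are usually the very same object.
            return true;
        }
        if (!(o instanceof SimpleMethod)) {
            return false;
        }
        return name.equals(((SimpleMethod) o).name);
    }

    @Override
    public int hashCode() {
        return name.hashCode();
    }
}
```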
Norman Maurer a6e6a9151f
Fix AppendableCharSequence.subSequence(...) where start == end. (#8798)
Motivation:

To conform to the CharSequence interface we need to return an empty CharSequence when start == end index and a subSequence is requested.

Modifications:

- Correctly handle the case where start == end
- Add unit test

Result:

Fix https://github.com/netty/netty/issues/8796.
2019-01-30 09:45:54 +01:00
Norman Maurer 948d4a9ec5
Minimize memory footprint for AbstractChannelHandlerContext for handlers that execute in the EventExecutor. (#8786)
Motivation:

We cache the Runnable for some tasks to reduce GC pressure in 4 different fields. This gives overhead in terms of memory usage in all cases, even if we always execute in the EventExecutor (which is the case most of the times).

Modifications:

Move the 4 fields to another class and only have one reference to this in AbstractChannelHandlerContext. This gives a small overhead in the case of execution that is done outside of the EventExecutor but reduces the memory footprint in the more likely execution case.

Result:

Less memory used per AbstractChannelHandlerContext in most cases.
2019-01-28 19:45:38 +01:00
Norman Maurer cd3254df88
Update to new checkstyle plugin (#8777) (#8780)
Motivation:

We need to update to a new checkstyle plugin to allow the usage of lambdas.

Modifications:

- Update to new plugin version.
- Fix checkstyle problems.

Result:

Be able to use checkstyle plugin which supports new Java syntax.
2019-01-25 11:58:42 +01:00
Nick Hill 1d5b7be3a7 Fix three bugs in CompositeByteBuf (#8773)
Motivation

In #8758, @doom369 reported an infinite loop bug in CompositeByteBuf
which was introduced in #8437.

This is the same small fix for that, along with fixes for two other bugs
found while re-inspecting the changes and adding unit tests.

Modification

- Replace recursive call to toComponentIndex with toComponentIndex0 as
intended
- Add missed "lastAccessed" racy cache invalidation in capacity(int)
method
- Fix incorrect determination of initial offset in non-zero cIndex case
of updateComponentOffsets method
- New unit tests for previously uncovered methods

Results

Fewer bugs.
2019-01-24 12:47:04 +01:00
Norman Maurer 3c2b86303a
Release message when validation of passed in ChannelPromise fails when calling write(...) / writeAndFlush(...) (#8769)
Motivation:

We need to release the message when we throw an IllegalArgumentException because of a validation failure of the promise to eliminate the risk of a memory leak.

Modifications:

- Consistently release the message before rethrow
- Add testcase.

Result:

Fixes https://github.com/netty/netty/issues/8765.
2019-01-24 07:43:04 +01:00
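A conceptual sketch of the leak fix described above; writeGuarded and its Runnable parameters are hypothetical stand-ins for the real write path. The point is simply that the message is released before the validation exception is rethrown.

```java
import io.netty.util.ReferenceCountUtil;

final class SafeWrite {
    static void writeGuarded(Object msg, Runnable validatePromise, Runnable doWrite) {
        try {
            validatePromise.run(); // may throw IllegalArgumentException
        } catch (RuntimeException e) {
            ReferenceCountUtil.release(msg); // drop our reference before propagating, so nothing leaks
            throw e;
        }
        doWrite.run();
    }
}
```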
kezhenxu94 57012dddb4 fix typo (#8741)
Motivation:

Correct typo

Modification:

Correct typo

Result:

JavaDoc and method name are more readable
2019-01-22 08:51:31 +01:00
Stephane Landelle 0431368621 HttpUtil#is100ContinueExpected clean up (#8740)
Motivation:

The current implementation extracts the header value as a String. We have an idiomatic way of checking for the presence of a header value.

Modification:

Use HttpHeaders#contains to check if it contains Expect: 100-continue.

Result:

Use idiomatic way + simplify boolean logic.
2019-01-22 08:49:43 +01:00
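A sketch of the idiomatic check mentioned above: ask HttpHeaders directly whether "Expect: 100-continue" is present (case-insensitively) instead of extracting the value as a String first.

```java
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpVersion;

public final class ExpectContinueCheck {
    static boolean expects100Continue(HttpRequest request) {
        return request.headers().contains(HttpHeaderNames.EXPECT, HttpHeaderValues.CONTINUE, true);
    }

    public static void main(String[] args) {
        HttpRequest request = new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.POST, "/upload");
        request.headers().set(HttpHeaderNames.EXPECT, HttpHeaderValues.CONTINUE);
        System.out.println(expects100Continue(request)); // true
    }
}
```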
root cf03ed0478 [maven-release-plugin] prepare for next development iteration 2019-01-21 12:26:44 +00:00
root 37484635cb [maven-release-plugin] prepare release netty-4.1.33.Final 2019-01-21 12:26:12 +00:00
Norman Maurer 9c192254c4 Remove duplicated declaration of dependency 2019-01-21 11:54:39 +01:00
Norman Maurer fabc6ee1bc
Fix flaky ChannelInitializerTest.testChannelInitializerEventExecutor() (#8738)
Motivation:

testChannelInitializerEventExecutor() sometimes failed as we sometimes missed counting down the latch. This can happen when we remove the handler from the pipeline before channelUnregistered(...) is called for it.

Modifications:

Countdown the latch in handlerRemoved(...).

Result:

Fix flaky test.
2019-01-21 09:01:04 +01:00
Bartek Kowalczyk 83b286f5d9 Set result for decoded request and add test for #8721 (#8721)
Motivation:
I want to fix a bug in the vert.x project (eclipse-vertx/vert.x#2562) caused by the ComposedLastHttpContent result being null. I don't know if it is intentional that the last decoded chunk in the issue returns null, but if not, I am providing a fix for that.

Modification:
* Added new constructor in ComposedLastHttpContent allowing to pass DecoderResult
* set DecoderResult.SUCCESS for created ComposedLastHttpContent in HttpContentEncoder
* set DecoderResult.SUCCESS for created ComposedLastHttpContent in HttpContentDecoder

Result:
Fixes eclipse-vertx/vert.x#2562
2019-01-21 07:45:03 +01:00
yulianoifa-mobius 1e4481e551 Allowed IP_FREEBIND option for UDP epoll (#8728)
Motivation:

When using load balancers, or when HA support is needed, there are cases where a UDP channel needs to bind to an IP address which is not available on any local network interface.

Modification:

Modified EpollDatagramChannelConfig to allow IP_FREEBIND option

Result:

Fixes #8727.
2019-01-21 07:42:05 +01:00
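A hedged usage sketch of the option enabled above, assuming a Linux host with the native epoll transport; the bind address is a placeholder for an IP that is not (yet) configured locally, and the no-op handler is only there to satisfy the bootstrap.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollDatagramChannel;
import io.netty.channel.epoll.EpollEventLoopGroup;

public final class FreeBindExample {
    public static void main(String[] args) {
        EpollEventLoopGroup group = new EpollEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(EpollDatagramChannel.class)
                    .option(EpollChannelOption.IP_FREEBIND, true)  // allow binding to a non-local address
                    .handler(new ChannelInboundHandlerAdapter());  // placeholder handler for the sketch
            bootstrap.bind("10.0.0.42", 5353).syncUninterruptibly();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```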
kezhenxu94 a2cd246f00 cleanup: fix indent (#8734)
Motivation:

Clean up to make the code style unified.

Modification:

Fix indent

Result:

Indents are unified
2019-01-19 17:40:55 +01:00
Norman Maurer bce0784e5e
Fix racy ChannelOutboundBuffer.testWriteTaskRejected test. (#8735)
Motivation:

testWriteTaskRejected was racy as we did not ensure we dispatched all events to the executor before shutting it down.

Modifications:

Add a latch to ensure we dispatched everything.

Result:

Fix racy test that failed sometimes before.
2019-01-19 17:17:03 +01:00
Norman Maurer dae5d9d3f9
Ensure FlowControlled data frames will be correctly removed from the … (#8726)
Motivation:

When a write error happens during the writing of flow-controlled data frames we fail to correctly detect this in the write loop, which may result in an infinite loop as we will never detect that the frame should be removed from the queue.

Modifications:

- When we fail a flowcontrolled data frame we ensure that the next frame.write(...) call will signal back that the whole frame was handled and so can be removed.
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8707.
2019-01-19 14:01:31 +01:00
Norman Maurer df5eb060f7
Only handle NXDOMAIN as failure when nameserver is authoritative or no other nameservers are left. (#8731)
Motivation:

When using multiple nameservers and a nameserver responds with NXDOMAIN we should only fail the query if the nameserver in question is authoritative or no nameservers are left to try.

Modifications:

- Try the next nameserver if NXDOMAIN was returned but the nameserver is not authoritative
- Adjust testcase to respect correct behaviour.

Result:

Fixes https://github.com/netty/netty/issues/8261
2019-01-18 21:06:44 +01:00
Norman Maurer e4b9d5f9a1
Skip osgi testsuite on JDK11. (#8733)
Motivation:

Since updating to OpenJDK 11.0.2 the OSGI testsuite fails. We should disable it until there is a version of the used plugins that works with this OpenJDK version.

Modifications:

Skip osgi testsuite when using JDK11.

Result:

Build passes again with JDK11.
2019-01-18 20:13:49 +01:00
kezhenxu94 8ebaa1b972 enhancement: extract duplicate code (#8732)
Motivation:

Clean up code to increase readability.

Modification:

Extract duplicate code and remove unnecessary throws

Result:

Share more code.
2019-01-18 19:44:47 +01:00
Norman Maurer c893939bd8
Update to latest JDK8 and JDK11 releases (#8725)
Motivation:

We should always build with the latest JDK releases.

Modifications:

Update JDK8 and JDK11 versions to the latest.

Result:

Run CI jobs on the latest JDK release.
2019-01-17 09:14:27 +01:00
Riyafa Abdul Hameed dd54c06e1e Close connection for CorruptedFrameException (#8705)
Motivation:

The CorruptedFrameException from the finish() method of the Utf8Validator gets propagated to other handlers while the connection is still open.

Modification:

Override exceptionCaught method of the Utf8FrameValidator and close the connection if it is a CorruptedFrameException.

Result:

The CorruptedFrameException gets propagated to other handlers only after properly closing the connection.
2019-01-17 07:17:12 +01:00
Norman Maurer 46fcc7bc97
Allow to run builds with OpenJDK 13. (#8724)
Motivation:

The first EA builds for OpenJDK 13 are available. We should support building with it and run builds on the CI.

Modifications:

- Add profile for JDK 13
- Add docker config to run with JDK 13.

Result:

Building and testing with OpenJDK 13 is possible.
2019-01-17 07:02:13 +01:00
Oleksii Kachaiev 7988cfec0a Correctly propagate write failures from ChunkedWriteHandler (#8716)
Motivation:

ChunkedWriteHandler should report the write operation as failed
in case *any* chunk was not written. Right now this is not
true for the last chunk.

Modifications:

* Check if the appropriate write operation was successful when
  reporting the last chunk

* Skip writing chunks if the write operation was already marked
  as "done"

* Test cases to cover write failures when dealing with chunked input

Result:

Fix https://github.com/netty/netty/issues/8700
2019-01-16 11:07:59 +01:00
Dmitriy Dumanskiy 165912365a Cleanup: simplify EpollEventLoop.closeAll() (#8719)
Motivation:

Avoid unnecessary iteration and `ArrayList` allocation.

Modification:

```
for (AbstractEpollChannel channel: channels.values()) {
     array.add(channel);
}
```
replaced with 

`array.addAll(channels.values())`

and

```
Collection<AbstractEpollChannel> array = new ArrayList<AbstractEpollChannel>(channels.size());
array.addAll(channels.values())
```

replaced with:

`AbstractEpollChannel[] localChannels = channels.values().toArray(new AbstractEpollChannel[0]);`

Result:

Simpler code in `EpollEventLoop.closeAll();`
2019-01-16 11:00:25 +01:00
kezhenxu94 53d711bdc7 extract duplicate code into method (#8720)
Motivation:

Clean up code to increase readability.

Modification:

Extract duplicate code blocks into method.

Result:

Less code duplication
2019-01-16 10:56:07 +01:00
Norman Maurer c424599593
Access the Constructor of the Channel in the constructor of ReflectiveChannelFactory. (#8718)
Motivation:

We should access the Constructor of the passed-in class in the constructor of ReflectiveChannelFactory, not only to reduce the overhead but also to fail fast.

Modifications:

Access the Constructor early.

Result:

Fails fast and less performance overhead.
2019-01-15 08:38:13 +01:00
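A standalone sketch of the fail-fast pattern described above (CachedReflectiveFactory is illustrative, not the actual ReflectiveChannelFactory): the no-arg constructor is resolved once, in the factory's constructor, instead of on every newInstance() call.

```java
import java.lang.reflect.Constructor;

final class CachedReflectiveFactory<T> {
    private final Constructor<? extends T> constructor;

    CachedReflectiveFactory(Class<? extends T> clazz) {
        try {
            // Fails fast if there is no public no-arg constructor.
            constructor = clazz.getConstructor();
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(
                    "Class " + clazz.getName() + " does not have a public no-arg constructor", e);
        }
    }

    T newInstance() {
        try {
            return constructor.newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(
                    "Unable to create instance of " + constructor.getDeclaringClass(), e);
        }
    }
}
```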
Derek Lewis 4ac5264f0e Remove unnecessary loop variable from `AsciiString`. (#8711)
Motivation:

Incrementing two variables in sync is not necessary when only one will do.

Modifications:

- Remove `j` from `for` loop and replace with `i`.
- Add more unit testing scenarios to cover changed code.

Results:

Unnecessary variable removed.
2019-01-15 08:33:29 +01:00
Derek Lewis 1b9cdc1f63 Updating `ByteBuf` Javadocs to represent actual behaviour. (#8709)
Motivation:

The javadocs stating `IndexOutOfBoundsException` is thrown were
different from what `ByteBuf` actually did. We want to ensure the
Javadocs represent reality.

Modifications:

Updated javadocs on `write*`, `ensureWriteable`, `capacity`, and
`maxCapacity` methods.

Results:

Javadocs more closely match actual behaviour.
2019-01-14 20:08:49 +01:00
Norman Maurer 9fb0765891
Use OpenJDK 12 EA 27 when running CI jobs for JDK 12. (#8715)
Motivation:

A new EA release was done for OpenJDK12.

Modifications:

Use OpenJDK12 EA 27 when running CI jobs for JDK 12.

Result:

Test against latest OpenJDK 12 EA build.
2019-01-14 13:33:37 +01:00
Norman Maurer 4155bc08f0
Correctly buffer multiple outbound streams if needed. (#8694)
Motivation:

In Http2FrameCodec we made the incorrect assumption that we can have at most one buffered outbound stream. This is not correct and we need to account for multiple buffered streams.

Modifications:

- Use a map to allow buffering multiple streams
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8692.
2019-01-14 08:25:45 +01:00
Norman Maurer 250e2494d9
Only call handlerRemoved(...) if handlerAdded(...) was called during adding the handler to the pipeline. (#8684)
Motivation:

Due to a race in DefaultChannelPipeline / AbstractChannelHandlerContext it was possible to have only handlerRemoved(...) called while tearing down the pipeline, even when handlerAdded(...) was never called. We need to ensure we call either both or none to guarantee a proper lifecycle for the handler.

Modifications:

- Enforce handlerAdded(...) / handlerRemoved(...) semantics / ordering
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8676 / https://github.com/netty/netty/issues/6536 .
2019-01-14 08:19:48 +01:00
Norman Maurer 82ec6ba815
Correctly detect and handle CNAME loops. (#8691)
Motivation:

We do not correctly detect loops when following CNAMEs and so may keep trying to follow them without any success.

Modifications:

- Correctly detect CNAME loops
- Do not cache CNAME entries which point to themselves
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8687.
2019-01-14 08:17:44 +01:00
kashike 6fdd7fcddb Fix minor spelling issues in javadocs (#8701)
Motivation:

Javadocs contained some spelling errors, we should fix these.

Modification:

Fix spelling

Result:

Javadoc cleanup.
2019-01-14 07:24:34 +01:00
kezhenxu94 66addd485f Use camel-case in NioEventLoop (#8713)
Motivation:

Java uses camel-case by convention.

Modification:

Consistently use camel-case.

Result:

More consistent code styling.
2019-01-14 07:20:57 +01:00
Norman Maurer fa84e2b3af Cleanup HTTP/2 tests for Http2FrameCodec and Http2MultiplexCodec (#8646)
Motivation:

Http2FrameCodecTest and Http2MultiplexCodecTest were quite fragile and often did not go through the whole pipeline, which sometimes made testing hard and error-prone.

Modification:

- Refactor tests to have data flow through the whole pipeline and so make the tests more robust (by testing the whole implementation).

Result:

Easier to write tests for the codecs in the future and more robust testing in general.

Beside this it also fixes https://github.com/netty/netty/issues/6036.
2018-12-28 15:07:01 +01:00
Jon Chambers 66ccd1483a Publicize default `explicitFlushAfterFlushes` count. (#8683)
Motivation:

Users who want to construct a `FlushConsolidationHandler` with a default `explicitFlushAfterFlushes` but non-default `consolidateWhenNoReadInProgress` may benefit from having an easy way to get the default "flush after flushes" count.

Modifications:

- Moved default `explicitFlushAfterFlushes` value to a public constant.
- Adjusted Javadoc accordingly.

Result:

Default `explicitFlushAfterFlushes` is accessible to callers.
2018-12-25 22:35:58 +01:00
Norman Maurer 6464c98743
Call FastThreadLocal.removeAll() before notify termination future of … (#8666)
Motivation:

We should try removing all FastThreadLocals for the Thread before we notify the termination future. The user may block on the future and once it unblocks the JVM may terminate and start unloading classes.

Modifications:

Remove all FastThreadLocals for the Thread before notifying the termination future.

Result:

Fixes https://github.com/netty/netty/issues/6596.
2018-12-21 11:06:43 +01:00
Alex Vasiliev e2d9665707 Added comments to LineBasedFrameDecoder, JsonObjectDecoder and XmlFrameDecoder that they are only compatible with UTF-8 encoded streams. (#8651)
Motivation:

Upon investigation of the source code, LineBasedFrameDecoder, JsonObjectDecoder and XmlFrameDecoder
appeared to only support ASCII or UTF-8 input. This is an important characteristic
that was not reflected in any documentation, which could lead to improper usage and bugs.

Modifications:

A Javadoc comment is added to all three classes to state that the implementation is only
compatible with UTF-8 or ASCII input streams and briefly touches on implementation details.

Result:

The end user of the netty library does not have to study the source code to determine the character
encoding limitations of the given classes.
2018-12-20 07:40:06 +01:00
Norman Maurer 9947df4a74
Add test for correctly handling SSLSessionBindingEvent when acting on th… (#8649)
Motivation:

During some other work I noticed we do not have any tests to ensure we correctly use SSLSessionBindingEvent. We should add some testing.

Modifications:

- Added unit test to verify we correctly implement it.
- Ignore the test when using Conscrypt as it does not implement it correctly.

Result:

More tests for custom SSL impl.
2018-12-19 12:55:48 +01:00
Norman Maurer d77bdeaa7d
Fix ClassCastException and native crash when using kqueue transport. (#8665)
Motivation:

How we did the mapping from native code to AbstractKQueueChannel was not safe and could lead to heap corruption. This then sometimes produced ClassCastExceptions or could also lead to crashes. This happened sometimes when running the testsuite.

Modifications:

Use a Map for the mapping (just as we do in the native epoll transport).

Result:

No more heap corruption / crashes.
2018-12-19 12:13:56 +01:00
Norman Maurer db3c76ed72
Update to use OpenJDK 12 EA24 when building with Java 12 (#8672)
Motivation:

A new EA build was released for Java 12.

Modifications:

Update to OpenJDK 12 EA24

Result:

Use latest OpenJDK 12 build when building with Java 12
2018-12-19 11:28:43 +01:00
Stephane Landelle 302dac8c45 Support 1012, 1013 and 1014 WebSocket close status code (#8664)
Motivation:

RFC 6455 doesn't define close status codes 1012, 1013 and 1014.
Yet, since then, IANA has defined them and web browsers support them.

From https://www.iana.org/assignments/websocket/websocket.xhtml:

* 1012: Service Restart
* 1013: Try Again Later
* 1014: The server was acting as a gateway or proxy and received an invalid response from the upstream server. This is similar to 502 HTTP Status Code.

Modification:

Make status codes 1012, 1013 and 1014 legit.

Result:

WebSocket status codes as defined by IANA are supported.
2018-12-17 19:42:50 +01:00
Norman Maurer de38d75137
Upgrade to new version of autobahntestsuite maven plugin. (#8668)
Motivation:

A new version was released that fixes a few test-cases to allow more close codes.

Modifications:

Upgrade to 0.1.5

Result:

More compliant testing of websockets.
2018-12-17 17:25:22 +01:00
Norman Maurer 35f609ba61
Update to latest stable jython release (#8667)
Motivation:

Using the latest jython release fixes some noise that is produced by an exception that is thrown when jython is terminated.

Exception in thread "Jython-Netty-Client-4" Exception in thread "Jython-Netty-Client-7" Exception in thread "Jython-Netty-Client-5" java.lang.NoClassDefFoundError: org/python/netty/util/concurrent/DefaultPromise$2
        at org.python.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:589)
        at org.python.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:397)
        at org.python.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:151)
        at java.lang.Thread.run(Thread.java:748)
Exception in thread "Jython-Netty-Client-8" java.lang.NoClassDefFoundError: org/python/netty/util/concurrent/DefaultPromise$2
        at org.python.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:589)
        at org.python.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:397)
        at org.python.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:151)
        at java.lang.Thread.run(Thread.java:748)
Exception in thread "Jython-Netty-Client-3" java.lang.NoClassDefFoundError: org/python/netty/util/concurrent/DefaultPromise$2
        at org.python.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:589)
        at org.python.netty.util.concurrent.DefaultPromise.setSuccess(DefaultPromise.java:397)%

Modification:

Update to latest stable release.

Result:

Less noise during build.
2018-12-17 10:24:54 +01:00
Norman Maurer b6d6d98404
Skip tests that use KeyManagerFactory if not supported by OpenSSL version / flavor (#8662)
Motivation:

We missed to skip a few tests that depend on the KeyManagerFactory if the used OpenSSL version / flavor not support it.

Modifications:

Add missing overrides.

Result:

Testsuite also passes for example when using LibreSSL.
2018-12-14 21:33:38 +01:00
Norman Maurer a3844da10b
NoClassDefFoundError on Android platform when try to use DefaultDnsServerAddressStreamProvider. (#8656)
Motivation:

Android does not contain javax.naming.*, so we should not try to use it, to prevent a NoClassDefFoundError on init.

Modifications:

Only try to use javax.naming.* to retrieve nameservers when not using Android.

Result:

Fixes https://github.com/netty/netty/issues/8654.
2018-12-14 21:31:21 +01:00
Norman Maurer 29d185b796 Revert "Support 1012, 1013 and 1014 WebSocket status code"
This reverts commit db6d94f82a.
2018-12-14 18:24:30 +01:00
Stephane Landelle db6d94f82a Support 1012, 1013 and 1014 WebSocket status code
Motivation:

RFC 6455 doesn't define status codes 1012, 1013 and 1014.
Yet, since then, IANA has defined them, web browsers support them and applications in the wild do use them, but it's currently not possible to build a Netty-based client for those services.

From https://www.iana.org/assignments/websocket/websocket.xhtml:

* 1012: Service Restart
* 1013: Try Again Later
* 1014: The server was acting as a gateway or proxy and received an invalid response from the upstream server. This is similar to 502 HTTP Status Code.

Modification:

Make status codes 1012, 1013 and 1014 legit.

Result:

WebSocket status codes as defined by IANA are supported.
2018-12-14 14:08:03 +01:00
Norman Maurer 83ab4ef5e3
Explicitly always call ctx.read() when AUTO_READ is false and HTTP/2 is used. (#8647)
Motivation:

We should always call ctx.read() even when AUTO_READ is false as flow-control is enforced by the HTTP/2 protocol.

See also https://tools.ietf.org/html/rfc7540#section-5.2.2.

We already did this before, but not explicitly, and only did so because of some implementation details of ByteToMessageDecoder. It's better to be explicit here so as not to risk breakage later on.

Modifications:

- Ensure we always call ctx.read() when AUTO_READ is false
- Add unit test.

Result:

No risk of staling the connection when HTTP/2 is used.
2018-12-13 19:02:20 +01:00
Feri73 d17bd5e160 Adding support for whitespace in resource path in tests (#8606)
Motivation:

On Windows, if the project is in a path that contains whitespace,
resources cannot be accessed and tests fail.

Modifications:

Adds ResourcesUtil.java in netty-common. Tests use ResourcesUtil.java to access a resource.

Result:

Being able to build netty in a path containing whitespace
2018-12-12 10:29:02 +01:00
Norman Maurer 1dacd37989
SSLSession.putValue / getValue / removeValue / getValueNames must be thread-safe. (#8648)
Motivation:

SSLSession.putValue / getValue / removeValue / getValueNames must be thread-safe as it may be called from multiple threads. This is also the case in the OpenJDK implementation.

Modifications:

Guard with synchronized (this) blocks to keep the memory overhead low as we do not expect to have these called frequently.

Result:

SSLSession implementation is thread-safe.
2018-12-12 07:41:26 +01:00
Paul Verest 25216be118 ReadTimeoutHandler - missing ) within JavaDoc example (#8645)
Motivation:

improve docs

Modification:

ReadTimeoutHandler - missing ) within JavaDoc example

No logic/unit tests affected
2018-12-10 20:50:38 +01:00
Norman Maurer bdcad8ef47
Fix incorrect assert in Http2MultiplexCodec caused by 9f9aa1a. (#8639)
Motivation:

9f9aa1a made some changes to fix how we handle ctx.read() in the child channel but incorrectly changed an assert.

Modifications:

Fix assert to be correct.

Result:

Code does not throw an AssertionError due to an incorrect assert check.
2018-12-07 21:00:47 +01:00
Norman Maurer 36c12a4c55
Fix typo in MessageToMessageDecoder api docs. (#8638)
Motivation:

We had some typos (most likely caused by copy-and-paste) in the api docs which should be fixed.

Modifications:

Replace the word encoder with decoder.

Result:

Correct apidocs.
2018-12-07 20:45:26 +01:00
Norman Maurer a564b70d51
More correct fix for using ChannelInitializer with custom EventExecutor. (#8633)
Motivation:

8331248671 made some changes to fix a race in ChannelInitializer when used with a custom EventExecutor. Unfortunately these were a bit racy themselves and so the testcase failed sometimes.

Modifications:

- More correct fix when using a custom EventExecutor
- Adjust the testcase to be more correct.

Result:

Proper fix for https://github.com/netty/netty/issues/8616.
2018-12-07 19:12:06 +01:00
多巴胺 22b2c4c3b8 Fix concurrency problem in UniqueIpFilter (#8635)
Motivation:

If two requests from the same IP arrive at the same time, `connected.contains(remoteIp)` may return false in both threads.

Modifications:

Check whether there is already a connection from the same IP by using the return value of the add operation.

Result:

The filter becomes thread-safe.
2018-12-07 13:50:00 +01:00
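A standalone sketch of the fix described above (UniqueIpTracker is illustrative, not the actual UniqueIpFilter): relying on the return value of Set.add(...) makes the check-and-insert atomic, so two concurrent connections from the same IP cannot both pass.

```java
import java.net.InetAddress;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

final class UniqueIpTracker {
    private final Set<InetAddress> connected = ConcurrentHashMap.newKeySet();

    /** Returns true only for the single caller that registered this address first. */
    boolean tryRegister(InetAddress remoteIp) {
        return connected.add(remoteIp); // atomic check-and-insert, no separate contains() call
    }

    void unregister(InetAddress remoteIp) {
        connected.remove(remoteIp);
    }
}
```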
tomer doron 2b651eb1a2 support publishing snapshots from docker based ci (#8634)
motivation: automate snapshot publishing from docker based ci

changes:
* add local settings.xml with env variables for publishing to sonatype-nexus-snapshots
* pipe UID/PWD env variable in docker compose
2018-12-07 05:43:06 +01:00
Norman Maurer 51a650979f
Skip test on windows as the semantics we expect are only true on Linux / Unix / BSD / MacOS (#8629)
Motivation:

In the test we assume some semantics on how RST is done that are not true for Windows so we should skip it.

Modifications:

Skip test when on windows.

Result:

Be able to run testsuite on windows. Fixes https://github.com/netty/netty/issues/8571.
2018-12-06 20:43:40 +01:00
Feri73 5df235c083 Correcting Maven Dependencies (#8622)
Motivation:

Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.

Modifications:

For all maven modules, add all of their dependencies to pom.xml

Result:

All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
2018-12-06 09:01:14 +01:00
Norman Maurer 8331248671
ChannelInitializer may be invoked multiple times when used with custom EventExecutor. (#8620)
Motivation:

The ChannelInitializer may be invoked multiple times when used with a custom EventExecutor as the removal operation may be done asynchronously. We need to guard against this.

Modifications:

- Change Map to Set which is more correct in terms of how we use it.
- Ensure we only modify the internal Set when the handler was not removed yet
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8616.
2018-12-05 19:30:17 +01:00
Norman Maurer 6739755d39
NioEventLoop.register(...) should offload to the EventLoop if not alr… (#8612)
Motivation:

java.nio.channels.spi.AbstractSelectableChannel.register(...) needs to obtain multiple locks during execution, which may produce a long wait time if we are currently selecting. This led to multiple CI failures in the past.

Modifications:

Ensure the register call takes place on the EventLoop.

Result:

No more flaky CI test timeouts.
2018-12-05 15:31:21 +01:00
Norman Maurer 9f9aa1ae01
Respect ctx.read() calls while processing reads for the child channels when using the Http2MultiplexCodec. (#8617)
Motivation:

We did not correctly respect ctx.read() calls while processing a read for a child Channel. This could lead to read stalls when auto read is disabled and no other read was requested.

Modifications:

- Keep track of extra read() calls while processing reads
- Add unit tests that verify that read() is respected when triggered either in channelRead(...) or channelReadComplete(...)

Result:

Fixes https://github.com/netty/netty/issues/8209.
2018-12-05 15:29:33 +01:00
Nick Travers d0d30f1fbe Loosen bounds check on CompositeByteBuf's maxNumComponents (#8621)
Motivation:

In versions of Netty prior to 4.1.31.Final, a CompositeByteBuf could be
created with any size (including potentially nonsensical negative
values). This behavior changed in e7737b993, which introduced a bounds
check to only allow for a component size greater than one. This broke
some existing use cases that attempted to create a byte buf with a
single component.

Modifications:

Lower the bounds check on numComponents to include the single component
case, but still throw an exception for anything less than one.

Add unit tests for the case of numComponents being less than, equal to,
and greater than this lower bound.

Result:

Return to the behavior of 4.1.30.Final, allowing one component, but
still include an explicit check against a lower bound.

Note that while creating a CompositeByteBuf with a single component is
in some ways a contradiction of the term "composite", this patch caters
for existing uses while excluding the clearly nonsensical case of asking
for a CompositeByteBuf with zero or fewer components.

Fixes #8613.
2018-12-05 08:42:23 +01:00
Francesco Nigro b8a3394f9b Adding an execute burst cost benchmark for Netty executors (#8594)
Motivation:

Netty executors don't yet have any means to be compared with each other
nor with the j.u.c. executors

Modifications:

A new benchmark measuring execute burst cost is being added

Result:

It's now possible to compare some of the Netty executors with each other
and with the j.u.c. executors
2018-12-04 15:46:25 +01:00
Norman Maurer 2680357423
Provide a way to cache the internal nioBuffer of the PooledByteBuffer… (#8603)
Motivation:

Often a temporary ByteBuffer is used which can be cached to reduce the GC pressure.

Modifications:

Cache the ByteBuffer in the PoolThreadCache as well.

Result:

Less GC.
2018-12-04 15:26:05 +01:00
Norman Maurer dcbd7c492b
Update to OpenJDK 12 ea22 (#8618)
Motivation:

We should use the latest OpenJDK 12 release when running tests against Java12.

Modifications:

- Update to OpenJDK 12 ea22.
- Update pax exam version
- skip OSGI testsuite on Java12 as it does not work with ea22 yet.

Result:

Use latest OpenJDK 12 version when running on the CI.
2018-12-04 11:59:10 +01:00
Julien Hoarau d05666ae2d Set-Cookie headers should not be combined (#8611)
Motivation:

According to the HTTP spec set-cookie headers should not be combined
because they are not using the list syntax.

Modifications:

Do not combine set-cookie headers.

Result:

Set-Cookie headers won't be combined anymore
2018-12-01 10:47:18 +01:00
Nick Hill a0c3081d82 Reduce http2 buffer slicing (#8598)
Motivation

DefaultHttp2FrameReader currently does a fair amount of "intermediate"
slicing which can be avoided.

Modifications

Avoid slicing the input buffer in DefaultHttp2FrameReader until
necessary. In one instance this also means retainedSlice can be used
instead (which may also avoid allocating).

Results

Less allocations when using http2.
2018-11-29 19:45:52 +01:00
root 8eb313072e [maven-release-plugin] prepare for next development iteration 2018-11-29 11:15:09 +00:00
root afcb4a37d3 [maven-release-plugin] prepare release netty-4.1.32.Final 2018-11-29 11:14:20 +00:00
Nick Hill fedf3ccecb Harden ref-counting concurrency semantics (#8583)
Motivation

#8563 highlighted race conditions introduced by the prior optimistic
update optimization in 83a19d5650. These
were known at the time but considered acceptable given the perf
benefit in high contention scenarios.

This PR proposes a modified approach which provides roughly half the
gains but stronger concurrency semantics. Race conditions still exist
but their scope is narrowed to much less likely cases (releases
coinciding with retain overflow), and even in those
cases certain guarantees are still assured. Once release() returns true,
all subsequent release/retains are guaranteed to throw, and in
particular deallocate will be called at most once.

Modifications

- Use even numbers internally (including -ve) for live refcounts
- "Final" release changes to odd number (equivalent to refcount 0)
- Retain still uses faster getAndAdd, release uses CAS loop
- First CAS attempt uses non-volatile read
- Thread.yield() after a failed CAS provides a net gain

Result

More (though not completely) robust concurrency semantics for ref
counting; increased latency under high contention, but still roughly
twice as fast as the original logic. Bench results to follow
2018-11-29 08:32:32 +01:00
Norman Maurer 057c19f92a
Move less common code-path to extra method to allow inlining of writeUtf8. (#8600)
Motivation:

ByteBuf is used everywhere so we should try hard to be able to make things inlinable. During benchmarks it showed that writeCharSequence(...) fails to inline writeUtf8 because it is too big, even though it is hot.

Modifications:

Move less common code-path to extra method to allow inlining.

Result:

Be able to inline writeUtf8 in most cases.
2018-11-27 21:03:35 +01:00
Norman Maurer 15e4fe05a8 Revert "Provide a way to cache the internal nioBuffer of the PooledByteBuffer to reduce GC. (#8593)"
This reverts commit 8cd005ba43 as it seems to produce some failures in some cases. This needs more research.
2018-11-27 20:02:34 +01:00
Norman Maurer 8cd005ba43
Provide a way to cache the internal nioBuffer of the PooledByteBuffer to reduce GC. (#8593)
Motivation:

Often a temporary ByteBuffer is used which can be cached to reduce the GC pressure.

Modifications:

Add a Deque per PoolChunk which will be used for caching.

Result:

Less GC.
2018-11-27 13:55:13 +01:00
Rolandz 89639ce322 Fix offset calculation in PooledByteBufAllocator when used
Motivation:

When we create a new chunk with memory alignment, the offset of the direct memory should be
'alignment - address & (alignment - 1)', not just 'address & (alignment - 1)'.

Modification:

Change offset calculating formula to offset = alignment - address & (alignment - 1) in PoolArena.DirectArena#offsetCacheLine and add a unit test to assert that.

Result:

Correctly calculate offset.
2018-11-27 11:47:34 +01:00
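A standalone arithmetic sketch of the corrected offset calculation described above, assuming alignment is a power of two; returning 0 for an already-aligned address is a detail this sketch adds for illustration, not necessarily what the actual PoolArena code does.

```java
public final class AlignmentOffset {
    static long offset(long address, int alignment) {
        long misalignment = address & (alignment - 1); // how far past the previous boundary we are
        return misalignment == 0 ? 0 : alignment - misalignment; // bytes to skip to reach a boundary
    }

    public static void main(String[] args) {
        System.out.println(offset(0x1004, 64)); // 60: skip 60 bytes to the next 64-byte boundary
        System.out.println(offset(0x1000, 64)); // 0: already aligned
    }
}
```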
Norman Maurer f4e4147df8
LocationAwareSlf4jLogger does not correctly format log message. (#8595)
Motivation:

We missed using MessageFormatter inside LocationAwareSlf4jLogger and so {} was not correctly replaced in log messages when using slf4j.
This regression was introduced by afe0767e9c.

Modifications:

- Make use of MessageFormatter
- Add unit test.

Result:

Fixes https://github.com/netty/netty/issues/8483.
2018-11-27 11:44:27 +01:00
Norman Maurer 2278991db7
Use addAndGet(...) as a replacement for compareAndSet(...) when tracking the direct memory usage. (#8596)
Motivation:

We can change from using compareAndSet to addAndGet, which emits a different CPU instruction on x86 (CMPXCHG to XADD) when counting direct memory usage. This instruction is cheaper in general and so produces less overhead on the "happy path". If we detect too much memory usage we just roll back the change before throwing the Error.

Modifications:

Replace compareAndSet(...) with addAndGet(...)

Result:

Less overhead when tracking direct memory.
2018-11-27 08:33:28 +01:00
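A standalone sketch of the accounting pattern described above (DirectMemoryCounter is illustrative, and it throws a plain OutOfMemoryError where Netty uses its own error type): addAndGet(...) optimistically, and roll back only on the unhappy path.

```java
import java.util.concurrent.atomic.AtomicLong;

final class DirectMemoryCounter {
    private static final AtomicLong USED = new AtomicLong();

    static void reserve(long bytes, long limit) {
        long newUsed = USED.addAndGet(bytes);  // XADD on x86, cheaper than a CMPXCHG loop
        if (newUsed > limit) {
            USED.addAndGet(-bytes);            // roll back the optimistic update before failing
            throw new OutOfMemoryError("Direct memory limit exceeded: " + newUsed + " > " + limit);
        }
    }

    static void release(long bytes) {
        USED.addAndGet(-bytes);
    }
}
```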
Norman Maurer af63626777
Factor out less common code-path into own method to allow inlining. (#8590)
Motivation:

During benchmarks two methods showed up as "hot method too big". We can easily make these smaller by factoring out some less common code-paths into an extra method and so allow inlining.

Modifications:

Factor out less common code path to an extra method.

Result:

Hot methods can be inlined.
2018-11-25 21:46:14 +01:00
Norman Maurer af34287fd1
HeadContext is inbound and outbound (#8592)
Motivation:

Our HeadContext in DefaultChannelPipeline does handle inbound and outbound but we only marked it as outbound. While this does not have any effect in the current code-base it can lead to problems when we change our internals (this is also how I found the bug).

Modifications:

Construct HeadContext so it is also marked as handling inbound.

Result:

More correct code.
2018-11-24 10:47:56 +01:00
Norman Maurer 2a2bc21067
Remove @Deprecated from package-info.java file (#8591)
Motivation:

31fd66b617 added @Deprecated to some classes but also to the package-info.java files. IntelliJ does not like having these annotations on package-info.java files.

Modifications:

Remove annotation from package-info.java

Result:

Be able to compile the project in IntelliJ.
2018-11-23 17:03:29 +01:00
Norman Maurer 31fd66b617
Mark OIO based transports as deprecated as preparation for removal in Netty 5. (#8579)
Motivation:

We plan to remove the OIO-based transports in Netty 5, so we should mark them as deprecated already.

Modifications:

Mark all OIO-based transports as deprecated.

Result:

Give the user a heads-up for removal.
2018-11-21 15:15:01 +01:00
Norman Maurer d728a72e74
Combine flushes in DnsNameResolver to allow usage of sendmmsg to reduce syscall costs (#8470)
Motivation:

Some transports support gathering writes when using datagrams; for example, this is the case for EpollDatagramChannel. We should minimize the calls to flush() to allow efficient use of sendmmsg in this case.

Modifications:

- minimize flush() operations when we query for multiple address types.
- reduce GC by always scheduling doResolveAll0(...) directly on the EventLoop.

Result:

Be able to use sendmmsg internally in the DnsNameResolver.
2018-11-21 06:42:40 +01:00
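Conceptually, the change batches the datagram writes and flushes once, so a transport such as EpollDatagramChannel can turn the single flush into one sendmmsg call rather than one syscall per query. A hedged sketch (hypothetical helper, not the resolver's actual code):

import io.netty.channel.Channel;

final class FlushBatchingExample {
    // Write all queries first and flush once so the transport can gather them.
    static void sendQueries(Channel ch, Object aQuery, Object aaaaQuery) {
        ch.write(aQuery);    // queued in the outbound buffer, not yet on the socket
        ch.write(aaaaQuery); // queued as well
        ch.flush();          // single flush; a gathering datagram transport can batch both
    }
}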
Norman Maurer 3d2fdc459c
Remove transitive dependency on slf4j in example (#8582)
Motivation:

We currently depend on slf4j in a transitive way in one of our example classes. We should not do this.

Modifications:

Remove logging in example.

Result:

Remove the unneeded dependency.
2018-11-21 06:39:28 +01:00
Norman Maurer cd689ee775
Fix javadoc to correctly explain how ChannelDuplexHandler.deregister(...) works. (#8577)
Motivation:

We had an error in the javadoc, which was most likely caused by copy-and-paste.

Modifications:

Fix javadoc.

Result:

Correct javadoc.
2018-11-20 16:45:15 +01:00
735 changed files with 33216 additions and 9632 deletions

View File

@ -1 +1 @@
Please review the [guidelines for contributing](http://netty.io/wiki/developer-guide.html) for this repository.
Please review the [guidelines for contributing](https://netty.io/wiki/developer-guide.html) for this repository.

4
.gitignore vendored
View File

@ -37,3 +37,7 @@ dependency-reduced-pom.xml
# exclude mainframer files
mainframer
.mainframer
# exclude docker-sync stuff
.docker-sync
*/.docker-sync

9
.mvn/settings.xml Normal file
View File

@ -0,0 +1,9 @@
<settings>
<servers>
<server>
<id>sonatype-nexus-snapshots</id>
<username>${env.SANOTYPE_USER}</username>
<password>${env.SANOTYPE_PASSWORD}</password>
</server>
</servers>
</settings>

View File

@ -42,5 +42,5 @@ My system has IPv6 disabled.
## How to contribute your work
Before submitting a pull request or push a commit, please read [our developer guide](http://netty.io/wiki/developer-guide.html).
Before submitting a pull request or push a commit, please read [our developer guide](https://netty.io/wiki/developer-guide.html).

View File

@ -4,7 +4,7 @@
Please visit the Netty web site for more information:
* http://netty.io/
* https://netty.io/
Copyright 2014 The Netty Project
@ -162,9 +162,9 @@ This product optionally depends on 'JBoss Marshalling', an alternative Java
serialization API, which can be obtained at:
* LICENSE:
* license/LICENSE.jboss-marshalling.txt (GNU LGPL 2.1)
* license/LICENSE.jboss-marshalling.txt (Apache License 2.0)
* HOMEPAGE:
* http://www.jboss.org/jbossmarshalling
* https://github.com/jboss-remoting/jboss-marshalling
This product optionally depends on 'Caliper', Google's micro-
benchmarking framework, which can be obtained at:
@ -205,6 +205,22 @@ the HTTP/2 HPACK algorithm written by Twitter. It can be obtained at:
* license/LICENSE.hpack.txt (Apache License 2.0)
* HOMEPAGE:
* https://github.com/twitter/hpack
This product contains a modified version of 'HPACK', a Java implementation of
the HTTP/2 HPACK algorithm written by Cory Benfield. It can be obtained at:
* LICENSE:
* license/LICENSE.hyper-hpack.txt (MIT License)
* HOMEPAGE:
* https://github.com/python-hyper/hpack/
This product contains a modified version of 'HPACK', a Java implementation of
the HTTP/2 HPACK algorithm written by Tatsuhiro Tsujikawa. It can be obtained at:
* LICENSE:
* license/LICENSE.nghttp2-hpack.txt (MIT License)
* HOMEPAGE:
* https://github.com/nghttp2/nghttp2/
This product contains a modified portion of 'Apache Commons Lang', a Java library
provides utilities for the java.lang API, which can be obtained at:

View File

@ -4,20 +4,20 @@ Netty is an asynchronous event-driven network application framework for rapid de
## Links
* [Web Site](http://netty.io/)
* [Downloads](http://netty.io/downloads.html)
* [Documentation](http://netty.io/wiki/)
* [Web Site](https://netty.io/)
* [Downloads](https://netty.io/downloads.html)
* [Documentation](https://netty.io/wiki/)
* [@netty_project](https://twitter.com/netty_project)
## How to build
For the detailed information about building and developing Netty, please visit [the developer guide](http://netty.io/wiki/developer-guide.html). This page only gives very basic information.
For the detailed information about building and developing Netty, please visit [the developer guide](https://netty.io/wiki/developer-guide.html). This page only gives very basic information.
You require the following to build Netty:
* Latest stable [Oracle JDK 7](http://www.oracle.com/technetwork/java/)
* Latest stable [Apache Maven](http://maven.apache.org/)
* If you are on Linux, you need [additional development packages](http://netty.io/wiki/native-transports.html) installed on your system, because you'll build the native transport.
* If you are on Linux, you need [additional development packages](https://netty.io/wiki/native-transports.html) installed on your system, because you'll build the native transport.
Note that this is build-time requirement. JDK 5 (for 3.x) or 6 (for 4.0+) is enough to run your Netty-based application.

View File

@ -20,7 +20,7 @@
<parent>
<groupId>io.netty</groupId>
<artifactId>netty-parent</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</parent>
<artifactId>netty-all</artifactId>
@ -31,6 +31,7 @@
<properties>
<generatedSourceDir>${project.build.directory}/src</generatedSourceDir>
<dependencyVersionsDir>${project.build.directory}/versions</dependencyVersionsDir>
<skipJapicmp>true</skipJapicmp>
</properties>
<profiles>
@ -111,6 +112,14 @@
<scope>compile</scope>
<optional>true</optional>
</dependency>
<!-- Just include the classes for the other platform so these are at least present in the netty-all artifact -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport-native-kqueue</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
<optional>true</optional>
</dependency>
</dependencies>
</profile>
<!-- The mac, openbsd and freebsd profile will only include the native jar for epol to the all jar.
@ -133,6 +142,14 @@
<scope>compile</scope>
<optional>true</optional>
</dependency>
<!-- Just include the classes for the other platform so these are at least present in the netty-all artifact -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
<optional>true</optional>
</dependency>
</dependencies>
</profile>
<profile>
@ -153,6 +170,14 @@
<scope>compile</scope>
<optional>true</optional>
</dependency>
<!-- Just include the classes for the other platform so these are at least present in the netty-all artifact -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
<optional>true</optional>
</dependency>
</dependencies>
</profile>
<profile>
@ -173,6 +198,14 @@
<scope>compile</scope>
<optional>true</optional>
</dependency>
<!-- Just include the classes for the other platform so these are at least present in the netty-all artifact -->
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>${project.version}</version>
<scope>compile</scope>
<optional>true</optional>
</dependency>
</dependencies>
</profile>

View File

@ -25,16 +25,16 @@
<groupId>io.netty</groupId>
<artifactId>netty-bom</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
<packaging>pom</packaging>
<name>Netty/BOM</name>
<description>Netty (Bill of Materials)</description>
<url>http://netty.io/</url>
<url>https://netty.io/</url>
<organization>
<name>The Netty Project</name>
<url>http://netty.io/</url>
<url>https://netty.io/</url>
</organization>
<licenses>
@ -57,9 +57,9 @@
<id>netty.io</id>
<name>The Netty Project Contributors</name>
<email>netty@googlegroups.com</email>
<url>http://netty.io/</url>
<url>https://netty.io/</url>
<organization>The Netty Project</organization>
<organizationUrl>http://netty.io/</organizationUrl>
<organizationUrl>https://netty.io/</organizationUrl>
</developer>
</developers>
@ -69,165 +69,165 @@
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-buffer</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-dns</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-haproxy</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-http</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-http2</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-memcache</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-mqtt</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-redis</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-smtp</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-socks</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-stomp</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-codec-xml</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-common</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-dev-tools</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-handler</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-handler-proxy</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-resolver</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-resolver-dns</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-rxtx</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-sctp</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-udt</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-example</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-all</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-unix-common</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-unix-common</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
<classifier>linux-x86_64</classifier>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-unix-common</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
<classifier>osx-x86_64</classifier>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-epoll</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
<classifier>linux-x86_64</classifier>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-kqueue</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</dependency>
<dependency>
<groupId>io.netty</groupId>
<artifactId>netty-transport-native-kqueue</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
<classifier>osx-x86_64</classifier>
</dependency>
</dependencies>

View File

@ -20,7 +20,7 @@
<parent>
<groupId>io.netty</groupId>
<artifactId>netty-parent</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</parent>
<artifactId>netty-buffer</artifactId>

View File

@ -38,6 +38,7 @@ import java.nio.channels.ScatteringByteChannel;
import java.nio.charset.Charset;
import static io.netty.util.internal.MathUtil.isOutOfBounds;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
/**
* A skeletal implementation of a buffer.
@ -73,9 +74,7 @@ public abstract class AbstractByteBuf extends ByteBuf {
private int maxCapacity;
protected AbstractByteBuf(int maxCapacity) {
if (maxCapacity < 0) {
throw new IllegalArgumentException("maxCapacity: " + maxCapacity + " (expected: >= 0)");
}
checkPositiveOrZero(maxCapacity, "maxCapacity");
this.maxCapacity = maxCapacity;
}
@ -271,10 +270,7 @@ public abstract class AbstractByteBuf extends ByteBuf {
@Override
public ByteBuf ensureWritable(int minWritableBytes) {
if (minWritableBytes < 0) {
throw new IllegalArgumentException(String.format(
"minWritableBytes: %d (expected: >= 0)", minWritableBytes));
}
checkPositiveOrZero(minWritableBytes, "minWritableBytes");
ensureWritable0(minWritableBytes);
return this;
}
@ -284,6 +280,7 @@ public abstract class AbstractByteBuf extends ByteBuf {
if (minWritableBytes <= writableBytes()) {
return;
}
final int writerIndex = writerIndex();
if (checkBounds) {
if (minWritableBytes > maxCapacity - writerIndex) {
throw new IndexOutOfBoundsException(String.format(
@ -293,7 +290,14 @@ public abstract class AbstractByteBuf extends ByteBuf {
}
// Normalize the current capacity to the power of 2.
int newCapacity = alloc().calculateNewCapacity(writerIndex + minWritableBytes, maxCapacity);
int minNewCapacity = writerIndex + minWritableBytes;
int newCapacity = alloc().calculateNewCapacity(minNewCapacity, maxCapacity);
int fastCapacity = writerIndex + maxFastWritableBytes();
// Grow by a smaller amount if it will avoid reallocation
if (newCapacity > fastCapacity && minNewCapacity <= fastCapacity) {
newCapacity = fastCapacity;
}
// Adjust to the new capacity.
capacity(newCapacity);
@ -302,10 +306,7 @@ public abstract class AbstractByteBuf extends ByteBuf {
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
ensureAccessible();
if (minWritableBytes < 0) {
throw new IllegalArgumentException(String.format(
"minWritableBytes: %d (expected: >= 0)", minWritableBytes));
}
checkPositiveOrZero(minWritableBytes, "minWritableBytes");
if (minWritableBytes <= writableBytes()) {
return 0;
@ -323,7 +324,14 @@ public abstract class AbstractByteBuf extends ByteBuf {
}
// Normalize the current capacity to the power of 2.
int newCapacity = alloc().calculateNewCapacity(writerIndex + minWritableBytes, maxCapacity);
int minNewCapacity = writerIndex + minWritableBytes;
int newCapacity = alloc().calculateNewCapacity(minNewCapacity, maxCapacity);
int fastCapacity = writerIndex + maxFastWritableBytes();
// Grow by a smaller amount if it will avoid reallocation
if (newCapacity > fastCapacity && minNewCapacity <= fastCapacity) {
newCapacity = fastCapacity;
}
// Adjust to the new capacity.
capacity(newCapacity);
@ -1381,30 +1389,38 @@ public abstract class AbstractByteBuf extends ByteBuf {
checkIndex0(index, fieldLength);
}
private static void checkRangeBounds(final int index, final int fieldLength, final int capacity) {
private static void checkRangeBounds(final String indexName, final int index,
final int fieldLength, final int capacity) {
if (isOutOfBounds(index, fieldLength, capacity)) {
throw new IndexOutOfBoundsException(String.format(
"index: %d, length: %d (expected: range(0, %d))", index, fieldLength, capacity));
"%s: %d, length: %d (expected: range(0, %d))", indexName, index, fieldLength, capacity));
}
}
final void checkIndex0(int index, int fieldLength) {
if (checkBounds) {
checkRangeBounds(index, fieldLength, capacity());
checkRangeBounds("index", index, fieldLength, capacity());
}
}
protected final void checkSrcIndex(int index, int length, int srcIndex, int srcCapacity) {
checkIndex(index, length);
if (checkBounds) {
checkRangeBounds(srcIndex, length, srcCapacity);
checkRangeBounds("srcIndex", srcIndex, length, srcCapacity);
}
}
protected final void checkDstIndex(int index, int length, int dstIndex, int dstCapacity) {
checkIndex(index, length);
if (checkBounds) {
checkRangeBounds(dstIndex, length, dstCapacity);
checkRangeBounds("dstIndex", dstIndex, length, dstCapacity);
}
}
protected final void checkDstIndex(int length, int dstIndex, int dstCapacity) {
checkReadableBytes(length);
if (checkBounds) {
checkRangeBounds("dstIndex", dstIndex, length, dstCapacity);
}
}
@ -1414,9 +1430,7 @@ public abstract class AbstractByteBuf extends ByteBuf {
* than the specified value.
*/
protected final void checkReadableBytes(int minimumReadableBytes) {
if (minimumReadableBytes < 0) {
throw new IllegalArgumentException("minimumReadableBytes: " + minimumReadableBytes + " (expected: >= 0)");
}
checkPositiveOrZero(minimumReadableBytes, "minimumReadableBytes");
checkReadableBytes0(minimumReadableBytes);
}
@ -1446,19 +1460,11 @@ public abstract class AbstractByteBuf extends ByteBuf {
* if the buffer was released before.
*/
protected final void ensureAccessible() {
if (checkAccessible && internalRefCnt() == 0) {
if (checkAccessible && !isAccessible()) {
throw new IllegalReferenceCountException(0);
}
}
/**
* Returns the reference count that is used internally by {@link #ensureAccessible()} to try to guard
* against using the buffer after it was released (best-effort).
*/
int internalRefCnt() {
return refCnt();
}
final void setIndex0(int readerIndex, int writerIndex) {
this.readerIndex = readerIndex;
this.writerIndex = writerIndex;

View File

@ -16,6 +16,8 @@
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakTracker;
import io.netty.util.internal.PlatformDependent;
@ -125,7 +127,7 @@ public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
@Override
public ByteBuf ioBuffer() {
if (PlatformDependent.hasUnsafe()) {
if (PlatformDependent.hasUnsafe() || isDirectBufferPooled()) {
return directBuffer(DEFAULT_INITIAL_CAPACITY);
}
return heapBuffer(DEFAULT_INITIAL_CAPACITY);
@ -133,7 +135,7 @@ public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
@Override
public ByteBuf ioBuffer(int initialCapacity) {
if (PlatformDependent.hasUnsafe()) {
if (PlatformDependent.hasUnsafe() || isDirectBufferPooled()) {
return directBuffer(initialCapacity);
}
return heapBuffer(initialCapacity);
@ -141,7 +143,7 @@ public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
@Override
public ByteBuf ioBuffer(int initialCapacity, int maxCapacity) {
if (PlatformDependent.hasUnsafe()) {
if (PlatformDependent.hasUnsafe() || isDirectBufferPooled()) {
return directBuffer(initialCapacity, maxCapacity);
}
return heapBuffer(initialCapacity, maxCapacity);
@ -222,9 +224,7 @@ public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
}
private static void validate(int initialCapacity, int maxCapacity) {
if (initialCapacity < 0) {
throw new IllegalArgumentException("initialCapacity: " + initialCapacity + " (expected: 0+)");
}
checkPositiveOrZero(initialCapacity, "initialCapacity");
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity: %d (expected: not greater than maxCapacity(%d)",
@ -249,9 +249,7 @@ public abstract class AbstractByteBufAllocator implements ByteBufAllocator {
@Override
public int calculateNewCapacity(int minNewCapacity, int maxCapacity) {
if (minNewCapacity < 0) {
throw new IllegalArgumentException("minNewCapacity: " + minNewCapacity + " (expected: 0+)");
}
checkPositiveOrZero(minNewCapacity, "minNewCapacity");
if (minNewCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"minNewCapacity: %d (expected: not greater than maxCapacity(%d)",

View File

@ -31,6 +31,11 @@ public abstract class AbstractDerivedByteBuf extends AbstractByteBuf {
super(maxCapacity);
}
@Override
final boolean isAccessible() {
return unwrap().isAccessible();
}
@Override
public final int refCnt() {
return refCnt0();

View File

@ -63,7 +63,7 @@ abstract class AbstractPooledDerivedByteBuf extends AbstractReferenceCountedByte
try {
maxCapacity(maxCapacity);
setIndex0(readerIndex, writerIndex); // It is assumed the bounds checking is done by the caller.
setRefCnt(1);
resetRefCnt();
@SuppressWarnings("unchecked")
final U castThis = (U) this;

View File

@ -16,80 +16,73 @@
package io.netty.buffer;
import io.netty.util.IllegalReferenceCountException;
import io.netty.util.internal.PlatformDependent;
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;
import static io.netty.util.internal.ObjectUtil.checkPositive;
import io.netty.util.internal.ReferenceCountUpdater;
/**
* Abstract base class for {@link ByteBuf} implementations that count references.
*/
public abstract class AbstractReferenceCountedByteBuf extends AbstractByteBuf {
private static final long REFCNT_FIELD_OFFSET;
private static final AtomicIntegerFieldUpdater<AbstractReferenceCountedByteBuf> refCntUpdater =
private static final long REFCNT_FIELD_OFFSET =
ReferenceCountUpdater.getUnsafeOffset(AbstractReferenceCountedByteBuf.class, "refCnt");
private static final AtomicIntegerFieldUpdater<AbstractReferenceCountedByteBuf> AIF_UPDATER =
AtomicIntegerFieldUpdater.newUpdater(AbstractReferenceCountedByteBuf.class, "refCnt");
private volatile int refCnt = 1;
static {
long refCntFieldOffset = -1;
try {
if (PlatformDependent.hasUnsafe()) {
refCntFieldOffset = PlatformDependent.objectFieldOffset(
AbstractReferenceCountedByteBuf.class.getDeclaredField("refCnt"));
}
} catch (Throwable ignore) {
refCntFieldOffset = -1;
private static final ReferenceCountUpdater<AbstractReferenceCountedByteBuf> updater =
new ReferenceCountUpdater<AbstractReferenceCountedByteBuf>() {
@Override
protected AtomicIntegerFieldUpdater<AbstractReferenceCountedByteBuf> updater() {
return AIF_UPDATER;
}
@Override
protected long unsafeOffset() {
return REFCNT_FIELD_OFFSET;
}
};
REFCNT_FIELD_OFFSET = refCntFieldOffset;
}
// Value might not equal "real" reference count, all access should be via the updater
@SuppressWarnings("unused")
private volatile int refCnt = updater.initialValue();
protected AbstractReferenceCountedByteBuf(int maxCapacity) {
super(maxCapacity);
}
@Override
int internalRefCnt() {
boolean isAccessible() {
// Try to do non-volatile read for performance as the ensureAccessible() is racy anyway and only provide
// a best-effort guard.
//
// TODO: Once we compile against later versions of Java we can replace the Unsafe usage here by varhandles.
return REFCNT_FIELD_OFFSET != -1 ? PlatformDependent.getInt(this, REFCNT_FIELD_OFFSET) : refCnt();
return updater.isLiveNonVolatile(this);
}
@Override
public int refCnt() {
return refCnt;
return updater.refCnt(this);
}
/**
* An unsafe operation intended for use by a subclass that sets the reference count of the buffer directly
*/
protected final void setRefCnt(int refCnt) {
refCntUpdater.set(this, refCnt);
updater.setRefCnt(this, refCnt);
}
/**
* An unsafe operation intended for use by a subclass that resets the reference count of the buffer to 1
*/
protected final void resetRefCnt() {
updater.resetRefCnt(this);
}
@Override
public ByteBuf retain() {
return retain0(1);
return updater.retain(this);
}
@Override
public ByteBuf retain(int increment) {
return retain0(checkPositive(increment, "increment"));
}
private ByteBuf retain0(final int increment) {
int oldRef = refCntUpdater.getAndAdd(this, increment);
if (oldRef <= 0 || oldRef + increment < oldRef) {
// Ensure we don't resurrect (which means the refCnt was 0) and also that we encountered an overflow.
refCntUpdater.getAndAdd(this, -increment);
throw new IllegalReferenceCountException(oldRef, increment);
}
return this;
return updater.retain(this, increment);
}
@Override
@ -104,27 +97,21 @@ public abstract class AbstractReferenceCountedByteBuf extends AbstractByteBuf {
@Override
public boolean release() {
return release0(1);
return handleRelease(updater.release(this));
}
@Override
public boolean release(int decrement) {
return release0(checkPositive(decrement, "decrement"));
return handleRelease(updater.release(this, decrement));
}
private boolean release0(int decrement) {
int oldRef = refCntUpdater.getAndAdd(this, -decrement);
if (oldRef == decrement) {
private boolean handleRelease(boolean result) {
if (result) {
deallocate();
return true;
}
if (oldRef < decrement || oldRef - decrement > oldRef) {
// Ensure we don't over-release, and avoid underflow.
refCntUpdater.getAndAdd(this, decrement);
throw new IllegalReferenceCountException(oldRef, -decrement);
}
return false;
return result;
}
/**
* Called once {@link #refCnt()} is equals 0.
*/

View File

@ -939,6 +939,12 @@ final class AdvancedLeakAwareCompositeByteBuf extends SimpleLeakAwareCompositeBy
return super.addComponent(increaseWriterIndex, cIndex, buffer);
}
@Override
public CompositeByteBuf addFlattenedComponents(boolean increaseWriterIndex, ByteBuf buffer) {
recordLeakNonRefCountingOperation(leak);
return super.addFlattenedComponents(increaseWriterIndex, buffer);
}
@Override
public CompositeByteBuf removeComponent(int cIndex) {
recordLeakNonRefCountingOperation(leak);

View File

@ -245,7 +245,6 @@ import java.nio.charset.UnsupportedCharsetException;
* Please refer to {@link ByteBufInputStream} and
* {@link ByteBufOutputStream}.
*/
@SuppressWarnings("ClassMayBeInterface")
public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
/**
@ -258,14 +257,14 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* capacity, the content of this buffer is truncated. If the {@code newCapacity} is greater
* than the current capacity, the buffer is appended with unspecified data whose length is
* {@code (newCapacity - currentCapacity)}.
*
* @throws IllegalArgumentException if the {@code newCapacity} is greater than {@link #maxCapacity()}
*/
public abstract ByteBuf capacity(int newCapacity);
/**
* Returns the maximum allowed capacity of this buffer. If a user attempts to increase the
* capacity of this buffer beyond the maximum capacity using {@link #capacity(int)} or
* {@link #ensureWritable(int)}, those methods will raise an
* {@link IllegalArgumentException}.
* Returns the maximum allowed capacity of this buffer. This value provides an upper
* bound on {@link #capacity()}.
*/
public abstract int maxCapacity();
@ -422,6 +421,15 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
*/
public abstract int maxWritableBytes();
/**
* Returns the maximum number of bytes which can be written for certain without involving
* an internal reallocation or data-copy. The returned value will be &ge; {@link #writableBytes()}
* and &le; {@link #maxWritableBytes()}.
*/
public int maxFastWritableBytes() {
return writableBytes();
}
/**
* Returns {@code true}
* if and only if {@code (this.writerIndex - this.readerIndex)} is greater
@ -513,22 +521,23 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
public abstract ByteBuf discardSomeReadBytes();
/**
* Makes sure the number of {@linkplain #writableBytes() the writable bytes}
* is equal to or greater than the specified value. If there is enough
* writable bytes in this buffer, this method returns with no side effect.
* Otherwise, it raises an {@link IllegalArgumentException}.
* Expands the buffer {@link #capacity()} to make sure the number of
* {@linkplain #writableBytes() writable bytes} is equal to or greater than the
* specified value. If there are enough writable bytes in this buffer, this method
* returns with no side effect.
*
* @param minWritableBytes
* the expected minimum number of writable bytes
* @throws IndexOutOfBoundsException
* if {@link #writerIndex()} + {@code minWritableBytes} &gt; {@link #maxCapacity()}
* if {@link #writerIndex()} + {@code minWritableBytes} &gt; {@link #maxCapacity()}.
* @see #capacity(int)
*/
public abstract ByteBuf ensureWritable(int minWritableBytes);
/**
* Tries to make sure the number of {@linkplain #writableBytes() the writable bytes}
* is equal to or greater than the specified value. Unlike {@link #ensureWritable(int)},
* this method does not raise an exception but returns a code.
* Expands the buffer {@link #capacity()} to make sure the number of
* {@linkplain #writableBytes() writable bytes} is equal to or greater than the
* specified value. Unlike {@link #ensureWritable(int)}, this method returns a status code.
*
* @param minWritableBytes
* the expected minimum number of writable bytes
@ -1756,9 +1765,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
/**
* Sets the specified boolean at the current {@code writerIndex}
* and increases the {@code writerIndex} by {@code 1} in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 1}
* If {@code this.writableBytes} is less than {@code 1}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeBoolean(boolean value);
@ -1766,9 +1774,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified byte at the current {@code writerIndex}
* and increases the {@code writerIndex} by {@code 1} in this buffer.
* The 24 high-order bits of the specified value are ignored.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 1}
* If {@code this.writableBytes} is less than {@code 1}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeByte(int value);
@ -1776,9 +1783,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 16-bit short integer at the current
* {@code writerIndex} and increases the {@code writerIndex} by {@code 2}
* in this buffer. The 16 high-order bits of the specified value are ignored.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 2}
* If {@code this.writableBytes} is less than {@code 2}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeShort(int value);
@ -1787,9 +1793,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Order at the current {@code writerIndex} and increases the
* {@code writerIndex} by {@code 2} in this buffer.
* The 16 high-order bits of the specified value are ignored.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 2}
* If {@code this.writableBytes} is less than {@code 2}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeShortLE(int value);
@ -1797,9 +1802,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 24-bit medium integer at the current
* {@code writerIndex} and increases the {@code writerIndex} by {@code 3}
* in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 3}
* If {@code this.writableBytes} is less than {@code 3}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeMedium(int value);
@ -1808,18 +1812,16 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* {@code writerIndex} in the Little Endian Byte Order and
* increases the {@code writerIndex} by {@code 3} in this
* buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 3}
* If {@code this.writableBytes} is less than {@code 3}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeMediumLE(int value);
/**
* Sets the specified 32-bit integer at the current {@code writerIndex}
* and increases the {@code writerIndex} by {@code 4} in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 4}
* If {@code this.writableBytes} is less than {@code 4}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeInt(int value);
@ -1827,9 +1829,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 32-bit integer at the current {@code writerIndex}
* in the Little Endian Byte Order and increases the {@code writerIndex}
* by {@code 4} in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 4}
* If {@code this.writableBytes} is less than {@code 4}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeIntLE(int value);
@ -1837,9 +1838,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 64-bit long integer at the current
* {@code writerIndex} and increases the {@code writerIndex} by {@code 8}
* in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 8}
* If {@code this.writableBytes} is less than {@code 8}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeLong(long value);
@ -1848,9 +1848,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* {@code writerIndex} in the Little Endian Byte Order and
* increases the {@code writerIndex} by {@code 8}
* in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 8}
* If {@code this.writableBytes} is less than {@code 8}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeLongLE(long value);
@ -1858,9 +1857,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 2-byte UTF-16 character at the current
* {@code writerIndex} and increases the {@code writerIndex} by {@code 2}
* in this buffer. The 16 high-order bits of the specified value are ignored.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 2}
* If {@code this.writableBytes} is less than {@code 2}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeChar(int value);
@ -1868,9 +1866,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 32-bit floating point number at the current
* {@code writerIndex} and increases the {@code writerIndex} by {@code 4}
* in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 4}
* If {@code this.writableBytes} is less than {@code 4}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeFloat(float value);
@ -1878,9 +1875,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 32-bit floating point number at the current
* {@code writerIndex} in Little Endian Byte Order and increases
* the {@code writerIndex} by {@code 4} in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 4}
* If {@code this.writableBytes} is less than {@code 4}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public ByteBuf writeFloatLE(float value) {
return writeIntLE(Float.floatToRawIntBits(value));
@ -1890,9 +1886,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 64-bit floating point number at the current
* {@code writerIndex} and increases the {@code writerIndex} by {@code 8}
* in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 8}
* If {@code this.writableBytes} is less than {@code 8}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeDouble(double value);
@ -1900,9 +1895,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Sets the specified 64-bit floating point number at the current
* {@code writerIndex} in Little Endian Byte Order and increases
* the {@code writerIndex} by {@code 8} in this buffer.
*
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is less than {@code 8}
* If {@code this.writableBytes} is less than {@code 8}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public ByteBuf writeDoubleLE(double value) {
return writeLongLE(Double.doubleToRawLongBits(value));
@ -1917,10 +1911,9 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* increases the {@code readerIndex} of the source buffer by the number of
* the transferred bytes while {@link #writeBytes(ByteBuf, int, int)}
* does not.
*
* @throws IndexOutOfBoundsException
* if {@code src.readableBytes} is greater than
* {@code this.writableBytes}
* If {@code this.writableBytes} is less than {@code src.readableBytes},
* {@link #ensureWritable(int)} will be called in an attempt to expand
* capacity to accommodate.
*/
public abstract ByteBuf writeBytes(ByteBuf src);
@ -1932,12 +1925,11 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* except that this method increases the {@code readerIndex} of the source
* buffer by the number of the transferred bytes (= {@code length}) while
* {@link #writeBytes(ByteBuf, int, int)} does not.
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param length the number of bytes to transfer
*
* @throws IndexOutOfBoundsException
* if {@code length} is greater than {@code this.writableBytes} or
* if {@code length} is greater then {@code src.readableBytes}
* @throws IndexOutOfBoundsException if {@code length} is greater then {@code src.readableBytes}
*/
public abstract ByteBuf writeBytes(ByteBuf src, int length);
@ -1945,15 +1937,15 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Transfers the specified source buffer's data to this buffer starting at
* the current {@code writerIndex} and increases the {@code writerIndex}
* by the number of the transferred bytes (= {@code length}).
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param srcIndex the first index of the source
* @param length the number of bytes to transfer
*
* @throws IndexOutOfBoundsException
* if the specified {@code srcIndex} is less than {@code 0},
* if {@code srcIndex + length} is greater than
* {@code src.capacity}, or
* if {@code length} is greater than {@code this.writableBytes}
* if the specified {@code srcIndex} is less than {@code 0}, or
* if {@code srcIndex + length} is greater than {@code src.capacity}
*/
public abstract ByteBuf writeBytes(ByteBuf src, int srcIndex, int length);
@ -1961,9 +1953,8 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Transfers the specified source array's data to this buffer starting at
* the current {@code writerIndex} and increases the {@code writerIndex}
* by the number of the transferred bytes (= {@code src.length}).
*
* @throws IndexOutOfBoundsException
* if {@code src.length} is greater than {@code this.writableBytes}
* If {@code this.writableBytes} is less than {@code src.length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*/
public abstract ByteBuf writeBytes(byte[] src);
@ -1971,15 +1962,15 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Transfers the specified source array's data to this buffer starting at
* the current {@code writerIndex} and increases the {@code writerIndex}
* by the number of the transferred bytes (= {@code length}).
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param srcIndex the first index of the source
* @param length the number of bytes to transfer
*
* @throws IndexOutOfBoundsException
* if the specified {@code srcIndex} is less than {@code 0},
* if {@code srcIndex + length} is greater than
* {@code src.length}, or
* if {@code length} is greater than {@code this.writableBytes}
* if the specified {@code srcIndex} is less than {@code 0}, or
* if {@code srcIndex + length} is greater than {@code src.length}
*/
public abstract ByteBuf writeBytes(byte[] src, int srcIndex, int length);
@ -1988,10 +1979,9 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* the current {@code writerIndex} until the source buffer's position
* reaches its limit, and increases the {@code writerIndex} by the
* number of the transferred bytes.
*
* @throws IndexOutOfBoundsException
* if {@code src.remaining()} is greater than
* {@code this.writableBytes}
* If {@code this.writableBytes} is less than {@code src.remaining()},
* {@link #ensureWritable(int)} will be called in an attempt to expand
* capacity to accommodate.
*/
public abstract ByteBuf writeBytes(ByteBuffer src);
@ -1999,29 +1989,28 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Transfers the content of the specified stream to this buffer
* starting at the current {@code writerIndex} and increases the
* {@code writerIndex} by the number of the transferred bytes.
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param length the number of bytes to transfer
*
* @return the actual number of bytes read in from the specified stream
*
* @throws IndexOutOfBoundsException
* if {@code length} is greater than {@code this.writableBytes}
* @throws IOException
* if the specified stream threw an exception during I/O
* @throws IOException if the specified stream threw an exception during I/O
*/
public abstract int writeBytes(InputStream in, int length) throws IOException;
public abstract int writeBytes(InputStream in, int length) throws IOException;
/**
* Transfers the content of the specified channel to this buffer
* starting at the current {@code writerIndex} and increases the
* {@code writerIndex} by the number of the transferred bytes.
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param length the maximum number of bytes to transfer
*
* @return the actual number of bytes read in from the specified channel
*
* @throws IndexOutOfBoundsException
* if {@code length} is greater than {@code this.writableBytes}
* @throws IOException
* if the specified channel threw an exception during I/O
*/
@ -2032,14 +2021,14 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* to this buffer starting at the current {@code writerIndex} and increases the
* {@code writerIndex} by the number of the transferred bytes.
* This method does not modify the channel's position.
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param position the file position at which the transfer is to begin
* @param length the maximum number of bytes to transfer
*
* @return the actual number of bytes read in from the specified channel
*
* @throws IndexOutOfBoundsException
* if {@code length} is greater than {@code this.writableBytes}
* @throws IOException
* if the specified channel threw an exception during I/O
*/
@ -2049,11 +2038,10 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Fills this buffer with <tt>NUL (0x00)</tt> starting at the current
* {@code writerIndex} and increases the {@code writerIndex} by the
* specified {@code length}.
* If {@code this.writableBytes} is less than {@code length}, {@link #ensureWritable(int)}
* will be called in an attempt to expand capacity to accommodate.
*
* @param length the number of <tt>NUL</tt>s to write to the buffer
*
* @throws IndexOutOfBoundsException
* if {@code length} is greater than {@code this.writableBytes}
*/
public abstract ByteBuf writeZero(int length);
@ -2061,12 +2049,12 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
* Writes the specified {@link CharSequence} at the current {@code writerIndex} and increases
* the {@code writerIndex} by the written bytes.
* in this buffer.
* If {@code this.writableBytes} is not large enough to write the whole sequence,
* {@link #ensureWritable(int)} will be called in an attempt to expand capacity to accommodate.
*
* @param sequence to write
* @param charset that should be used
* @return the written number of bytes
* @throws IndexOutOfBoundsException
* if {@code this.writableBytes} is not large enough to write the whole sequence
*/
public abstract int writeCharSequence(CharSequence sequence, Charset charset);
@ -2465,4 +2453,12 @@ public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf> {
@Override
public abstract ByteBuf touch(Object hint);
/**
* Used internally by {@link AbstractByteBuf#ensureAccessible()} to try to guard
* against using the buffer after it was released (best-effort).
*/
boolean isAccessible() {
return refCnt() != 0;
}
}

View File

@ -163,7 +163,8 @@ public class ByteBufInputStream extends InputStream implements DataInput {
@Override
public int read() throws IOException {
if (!buffer.isReadable()) {
int available = available();
if (available == 0) {
return -1;
}
return buffer.readByte() & 0xff;
@ -203,7 +204,8 @@ public class ByteBufInputStream extends InputStream implements DataInput {
@Override
public byte readByte() throws IOException {
if (!buffer.isReadable()) {
int available = available();
if (available == 0) {
throw new EOFException();
}
return buffer.readByte();
@ -245,22 +247,26 @@ public class ByteBufInputStream extends InputStream implements DataInput {
@Override
public String readLine() throws IOException {
if (!buffer.isReadable()) {
int available = available();
if (available == 0) {
return null;
}
if (lineBuf != null) {
lineBuf.setLength(0);
}
loop: do {
int c = buffer.readUnsignedByte();
--available;
switch (c) {
case '\n':
break loop;
case '\r':
if (buffer.isReadable() && (char) buffer.getUnsignedByte(buffer.readerIndex()) == '\n') {
if (available > 0 && (char) buffer.getUnsignedByte(buffer.readerIndex()) == '\n') {
buffer.skipBytes(1);
--available;
}
break loop;
@ -270,7 +276,7 @@ public class ByteBufInputStream extends InputStream implements DataInput {
}
lineBuf.append((char) c);
}
} while (buffer.isReadable());
} while (available > 0);
return lineBuf != null && lineBuf.length() > 0 ? lineBuf.toString() : StringUtil.EMPTY_STRING;
}

View File

@ -21,6 +21,7 @@ import io.netty.util.CharsetUtil;
import io.netty.util.Recycler;
import io.netty.util.Recycler.Handle;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.internal.MathUtil;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import io.netty.util.internal.SystemPropertyUtil;
@ -43,6 +44,7 @@ import java.util.Locale;
import static io.netty.util.internal.MathUtil.isOutOfBounds;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import static io.netty.util.internal.StringUtil.NEWLINE;
import static io.netty.util.internal.StringUtil.isSurrogate;
@ -471,6 +473,14 @@ public final class ByteBufUtil {
return buffer.forEachByteDesc(toIndex, fromIndex - toIndex, new ByteProcessor.IndexOfProcessor(value));
}
private static CharSequence checkCharSequenceBounds(CharSequence seq, int start, int end) {
if (MathUtil.isOutOfBounds(start, end - start, seq.length())) {
throw new IndexOutOfBoundsException("expected: 0 <= start(" + start + ") <= end (" + end
+ ") <= seq.length(" + seq.length() + ')');
}
return seq;
}
/**
* Encode a {@link CharSequence} in <a href="http://en.wikipedia.org/wiki/UTF-8">UTF-8</a> and write
* it to a {@link ByteBuf} allocated with {@code alloc}.
@ -495,7 +505,17 @@ public final class ByteBufUtil {
* This method returns the actual number of bytes written.
*/
public static int writeUtf8(ByteBuf buf, CharSequence seq) {
return reserveAndWriteUtf8(buf, seq, utf8MaxBytes(seq));
int seqLength = seq.length();
return reserveAndWriteUtf8Seq(buf, seq, 0, seqLength, utf8MaxBytes(seqLength));
}
/**
* Equivalent to <code>{@link #writeUtf8(ByteBuf, CharSequence) writeUtf8(buf, seq.subSequence(start, end))}</code>
* but avoids subsequence object allocation.
*/
public static int writeUtf8(ByteBuf buf, CharSequence seq, int start, int end) {
checkCharSequenceBounds(seq, start, end);
return reserveAndWriteUtf8Seq(buf, seq, start, end, utf8MaxBytes(end - start));
}
/**
@ -508,6 +528,21 @@ public final class ByteBufUtil {
* This method returns the actual number of bytes written.
*/
public static int reserveAndWriteUtf8(ByteBuf buf, CharSequence seq, int reserveBytes) {
return reserveAndWriteUtf8Seq(buf, seq, 0, seq.length(), reserveBytes);
}
/**
* Equivalent to <code>{@link #reserveAndWriteUtf8(ByteBuf, CharSequence, int)
* reserveAndWriteUtf8(buf, seq.subSequence(start, end), reserveBytes)}</code> but avoids
* subsequence object allocation if possible.
*
* @return actual number of bytes written
*/
public static int reserveAndWriteUtf8(ByteBuf buf, CharSequence seq, int start, int end, int reserveBytes) {
return reserveAndWriteUtf8Seq(buf, checkCharSequenceBounds(seq, start, end), start, end, reserveBytes);
}
private static int reserveAndWriteUtf8Seq(ByteBuf buf, CharSequence seq, int start, int end, int reserveBytes) {
for (;;) {
if (buf instanceof WrappedCompositeByteBuf) {
// WrappedCompositeByteBuf is a sub-class of AbstractByteBuf so it needs special handling.
@ -515,27 +550,31 @@ public final class ByteBufUtil {
} else if (buf instanceof AbstractByteBuf) {
AbstractByteBuf byteBuf = (AbstractByteBuf) buf;
byteBuf.ensureWritable0(reserveBytes);
int written = writeUtf8(byteBuf, byteBuf.writerIndex, seq, seq.length());
int written = writeUtf8(byteBuf, byteBuf.writerIndex, seq, start, end);
byteBuf.writerIndex += written;
return written;
} else if (buf instanceof WrappedByteBuf) {
// Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
buf = buf.unwrap();
} else {
byte[] bytes = seq.toString().getBytes(CharsetUtil.UTF_8);
byte[] bytes = seq.subSequence(start, end).toString().getBytes(CharsetUtil.UTF_8);
buf.writeBytes(bytes);
return bytes.length;
}
}
}
// Fast-Path implementation
static int writeUtf8(AbstractByteBuf buffer, int writerIndex, CharSequence seq, int len) {
return writeUtf8(buffer, writerIndex, seq, 0, len);
}
// Fast-Path implementation
static int writeUtf8(AbstractByteBuf buffer, int writerIndex, CharSequence seq, int start, int end) {
int oldWriterIndex = writerIndex;
// We can use the _set methods as these not need to do any index checks and reference checks.
// This is possible as we called ensureWritable(...) before.
for (int i = 0; i < len; i++) {
for (int i = start; i < end; i++) {
char c = seq.charAt(i);
if (c < 0x80) {
buffer._setByte(writerIndex++, (byte) c);
@ -557,17 +596,8 @@ public final class ByteBufUtil {
buffer._setByte(writerIndex++, WRITE_UTF_UNKNOWN);
break;
}
if (!Character.isLowSurrogate(c2)) {
buffer._setByte(writerIndex++, WRITE_UTF_UNKNOWN);
buffer._setByte(writerIndex++, Character.isHighSurrogate(c2) ? WRITE_UTF_UNKNOWN : c2);
continue;
}
int codePoint = Character.toCodePoint(c, c2);
// See http://www.unicode.org/versions/Unicode7.0.0/ch03.pdf#G2630.
buffer._setByte(writerIndex++, (byte) (0xf0 | (codePoint >> 18)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((codePoint >> 12) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((codePoint >> 6) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | (codePoint & 0x3f)));
// Extra method to allow inlining the rest of writeUtf8 which is the most likely code path.
writerIndex = writeUtf8Surrogate(buffer, writerIndex, c, c2);
} else {
buffer._setByte(writerIndex++, (byte) (0xe0 | (c >> 12)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((c >> 6) & 0x3f)));
@ -577,6 +607,21 @@ public final class ByteBufUtil {
return writerIndex - oldWriterIndex;
}
private static int writeUtf8Surrogate(AbstractByteBuf buffer, int writerIndex, char c, char c2) {
if (!Character.isLowSurrogate(c2)) {
buffer._setByte(writerIndex++, WRITE_UTF_UNKNOWN);
buffer._setByte(writerIndex++, Character.isHighSurrogate(c2) ? WRITE_UTF_UNKNOWN : c2);
return writerIndex;
}
int codePoint = Character.toCodePoint(c, c2);
// See http://www.unicode.org/versions/Unicode7.0.0/ch03.pdf#G2630.
buffer._setByte(writerIndex++, (byte) (0xf0 | (codePoint >> 18)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((codePoint >> 12) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((codePoint >> 6) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | (codePoint & 0x3f)));
return writerIndex;
}
/**
* Returns max bytes length of UTF8 character sequence of the given length.
*/
@ -599,22 +644,35 @@ public final class ByteBufUtil {
* This method is producing the exact length according to {@link #writeUtf8(ByteBuf, CharSequence)}.
*/
public static int utf8Bytes(final CharSequence seq) {
return utf8ByteCount(seq, 0, seq.length());
}
/**
* Equivalent to <code>{@link #utf8Bytes(CharSequence) utf8Bytes(seq.subSequence(start, end))}</code>
* but avoids subsequence object allocation.
* <p>
* This method produces the exact length according to {@link #writeUtf8(ByteBuf, CharSequence, int, int)}.
*/
public static int utf8Bytes(final CharSequence seq, int start, int end) {
return utf8ByteCount(checkCharSequenceBounds(seq, start, end), start, end);
}
private static int utf8ByteCount(final CharSequence seq, int start, int end) {
if (seq instanceof AsciiString) {
return seq.length();
return end - start;
}
int seqLength = seq.length();
int i = 0;
int i = start;
// ASCII fast path
while (i < seqLength && seq.charAt(i) < 0x80) {
while (i < end && seq.charAt(i) < 0x80) {
++i;
}
// !ASCII is packed in a separate method to let the ASCII case be smaller
return i < seqLength ? i + utf8Bytes(seq, i, seqLength) : i;
return i < end ? (i - start) + utf8BytesNonAscii(seq, i, end) : i - start;
}
private static int utf8Bytes(final CharSequence seq, final int start, final int length) {
private static int utf8BytesNonAscii(final CharSequence seq, final int start, final int end) {
int encodedLength = 0;
for (int i = start; i < length; i++) {
for (int i = start; i < end; i++) {
final char c = seq.charAt(i);
// making it 100% branchless isn't rewarding due to the many bit operations necessary!
if (c < 0x800) {
@ -994,9 +1052,7 @@ public final class ByteBufUtil {
}
private static String hexDump(ByteBuf buffer, int fromIndex, int length) {
if (length < 0) {
throw new IllegalArgumentException("length: " + length);
}
checkPositiveOrZero(length, "length");
if (length == 0) {
return "";
}
@ -1016,9 +1072,7 @@ public final class ByteBufUtil {
}
private static String hexDump(byte[] array, int fromIndex, int length) {
if (length < 0) {
throw new IllegalArgumentException("length: " + length);
}
checkPositiveOrZero(length, "length");
if (length == 0) {
return "";
}
@ -1041,7 +1095,7 @@ public final class ByteBufUtil {
if (length == 0) {
return StringUtil.EMPTY_STRING;
} else {
int rows = length / 16 + (length % 15 == 0? 0 : 1) + 4;
int rows = length / 16 + ((length & 15) == 0? 0 : 1) + 4;
StringBuilder buf = new StringBuilder(rows * 80);
appendPrettyHexDump(buf, buffer, offset, length);
return buf.toString();
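A quick standalone check of the row-count arithmetic above: for non-negative lengths, (length & 15) equals length % 16, so the new test correctly detects dumps that end exactly on a 16-byte boundary, which the replaced "% 15" test mishandled. Pure arithmetic, no Netty types involved.

public final class RowCountSketch {
    static int rows(int length) {
        // Same formula as the prettyHexDump sizing above.
        return length / 16 + ((length & 15) == 0 ? 0 : 1) + 4;
    }

    public static void main(String[] args) {
        for (int length : new int[] { 1, 15, 16, 17, 32, 100 }) {
            System.out.println(length + " bytes -> " + rows(length) + " rows; (length & 15)="
                    + (length & 15) + ", length % 16=" + (length % 16));
        }
    }
}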
@ -1136,7 +1190,7 @@ public final class ByteBufUtil {
static ThreadLocalUnsafeDirectByteBuf newInstance() {
ThreadLocalUnsafeDirectByteBuf buf = RECYCLER.get();
buf.setRefCnt(1);
buf.resetRefCnt();
return buf;
}
@ -1169,7 +1223,7 @@ public final class ByteBufUtil {
static ThreadLocalDirectByteBuf newInstance() {
ThreadLocalDirectByteBuf buf = RECYCLER.get();
buf.setRefCnt(1);
buf.resetRefCnt();
return buf;
}
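Taken together, the changes to this file add range-based UTF-8 helpers. A usage sketch, assuming the public overloads ByteBufUtil.utf8Bytes(CharSequence, int, int) and ByteBufUtil.writeUtf8(ByteBuf, CharSequence, int, int) referenced in the Javadoc above: encode a slice of a CharSequence without allocating a subSequence or an intermediate String.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class RangeUtf8Sketch {
    public static void main(String[] args) {
        CharSequence seq = "prefix|payload-äöü|suffix";
        int start = "prefix|".length();
        int end = seq.length() - "|suffix".length();

        // Exact byte count for the [start, end) range, no intermediate objects.
        int needed = ByteBufUtil.utf8Bytes(seq, start, end);
        ByteBuf buf = Unpooled.buffer(needed);
        int written = ByteBufUtil.writeUtf8(buf, seq, start, end);

        System.out.println("needed=" + needed + ", written=" + written
                + ", content=" + buf.toString(CharsetUtil.UTF_8));
        buf.release();
    }
}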


@ -64,9 +64,9 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
if (alloc == null) {
throw new NullPointerException("alloc");
}
if (maxNumComponents < 2) {
if (maxNumComponents < 1) {
throw new IllegalArgumentException(
"maxNumComponents: " + maxNumComponents + " (expected: >= 2)");
"maxNumComponents: " + maxNumComponents + " (expected: >= 1)");
}
this.alloc = alloc;
this.direct = direct;
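With the relaxed bound above, a composite with maxNumComponents == 1 becomes legal; it simply consolidates into a single backing buffer whenever more than one component would otherwise accumulate. A hypothetical usage sketch (not part of the diff itself):

import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public final class SingleComponentCompositeSketch {
    public static void main(String[] args) {
        CompositeByteBuf composite = Unpooled.compositeBuffer(1); // previously required >= 2
        composite.addComponent(true, Unpooled.copiedBuffer(new byte[] { 1, 2 }));
        composite.addComponent(true, Unpooled.copiedBuffer(new byte[] { 3, 4 }));
        // After the second add the implementation may consolidate down to one component.
        System.out.println("numComponents=" + composite.numComponents()
                + ", readableBytes=" + composite.readableBytes());
        composite.release();
    }
}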
@ -96,8 +96,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
this(alloc, direct, maxNumComponents,
buffers instanceof Collection ? ((Collection<ByteBuf>) buffers).size() : 0);
addComponents0(false, 0, buffers);
consolidateIfNeeded();
addComponents(false, 0, buffers);
setIndex(0, capacity());
}
@ -158,8 +157,8 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased use {@link #addComponent(boolean, ByteBuf)}.
* <p>
* {@link ByteBuf#release()} ownership of {@code buffer} is transfered to this {@link CompositeByteBuf}.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transfered to this
* {@link ByteBuf#release()} ownership of {@code buffer} is transferred to this {@link CompositeByteBuf}.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transferred to this
* {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponent(ByteBuf buffer) {
@ -172,10 +171,10 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased use {@link #addComponents(boolean, ByteBuf[])}.
* <p>
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transferred to this
* {@link CompositeByteBuf}.
* @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
* ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
* ownership of all {@link ByteBuf} objects is transferred to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(ByteBuf... buffers) {
return addComponents(false, buffers);
@ -187,10 +186,10 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased use {@link #addComponents(boolean, Iterable)}.
* <p>
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transferred to this
* {@link CompositeByteBuf}.
* @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
* ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
* ownership of all {@link ByteBuf} objects is transferred to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(Iterable<ByteBuf> buffers) {
return addComponents(false, buffers);
@ -202,9 +201,9 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased use {@link #addComponent(boolean, int, ByteBuf)}.
* <p>
* {@link ByteBuf#release()} ownership of {@code buffer} is transfered to this {@link CompositeByteBuf}.
* {@link ByteBuf#release()} ownership of {@code buffer} is transferred to this {@link CompositeByteBuf}.
* @param cIndex the index on which the {@link ByteBuf} will be added.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transfered to this
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transferred to this
* {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponent(int cIndex, ByteBuf buffer) {
@ -215,25 +214,22 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Add the given {@link ByteBuf} and increase the {@code writerIndex} if {@code increaseWriterIndex} is
* {@code true}.
*
* {@link ByteBuf#release()} ownership of {@code buffer} is transfered to this {@link CompositeByteBuf}.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transfered to this
* {@link ByteBuf#release()} ownership of {@code buffer} is transferred to this {@link CompositeByteBuf}.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transferred to this
* {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponent(boolean increaseWriterIndex, ByteBuf buffer) {
checkNotNull(buffer, "buffer");
addComponent0(increaseWriterIndex, componentCount, buffer);
consolidateIfNeeded();
return this;
return addComponent(increaseWriterIndex, componentCount, buffer);
}
/**
* Add the given {@link ByteBuf}s and increase the {@code writerIndex} if {@code increaseWriterIndex} is
* {@code true}.
*
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transferred to this
* {@link CompositeByteBuf}.
* @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
* ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
* ownership of all {@link ByteBuf} objects is transferred to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(boolean increaseWriterIndex, ByteBuf... buffers) {
checkNotNull(buffers, "buffers");
@ -246,24 +242,22 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Add the given {@link ByteBuf}s and increase the {@code writerIndex} if {@code increaseWriterIndex} is
* {@code true}.
*
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transferred to this
* {@link CompositeByteBuf}.
* @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
* ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
* ownership of all {@link ByteBuf} objects is transferred to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(boolean increaseWriterIndex, Iterable<ByteBuf> buffers) {
addComponents0(increaseWriterIndex, componentCount, buffers);
consolidateIfNeeded();
return this;
return addComponents(increaseWriterIndex, componentCount, buffers);
}
/**
* Add the given {@link ByteBuf} on the specific index and increase the {@code writerIndex}
* if {@code increaseWriterIndex} is {@code true}.
*
* {@link ByteBuf#release()} ownership of {@code buffer} is transfered to this {@link CompositeByteBuf}.
* {@link ByteBuf#release()} ownership of {@code buffer} is transferred to this {@link CompositeByteBuf}.
* @param cIndex the index on which the {@link ByteBuf} will be added.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transfered to this
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transferred to this
* {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponent(boolean increaseWriterIndex, int cIndex, ByteBuf buffer) {
@ -294,7 +288,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
c.reposition(components[cIndex - 1].endOffset);
}
if (increaseWriterIndex) {
writerIndex(writerIndex() + readableBytes);
writerIndex += readableBytes;
}
return cIndex;
} finally {
@ -304,14 +298,14 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
}
}
// unwrap if already sliced
@SuppressWarnings("deprecation")
private Component newComponent(ByteBuf buf, int offset) {
if (checkAccessible && buf.refCnt() == 0) {
if (checkAccessible && !buf.isAccessible()) {
throw new IllegalReferenceCountException(0);
}
int srcIndex = buf.readerIndex(), len = buf.readableBytes();
ByteBuf slice = null;
// unwrap if already sliced
if (buf instanceof AbstractUnpooledSlicedByteBuf) {
srcIndex += ((AbstractUnpooledSlicedByteBuf) buf).idx(0);
slice = buf;
@ -330,13 +324,13 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
* <p>
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transferred to this
* {@link CompositeByteBuf}.
* @param cIndex the index on which the {@link ByteBuf} will be added. {@link ByteBuf#release()} ownership of all
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects is transferred to this
* {@link CompositeByteBuf}.
* @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all {@link ByteBuf#release()}
* ownership of all {@link ByteBuf} objects is transfered to this {@link CompositeByteBuf}.
* ownership of all {@link ByteBuf} objects is transferred to this {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(int cIndex, ByteBuf... buffers) {
checkNotNull(buffers, "buffers");
@ -345,20 +339,25 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
return this;
}
private int addComponents0(boolean increaseWriterIndex, final int cIndex, ByteBuf[] buffers, int arrOffset) {
private CompositeByteBuf addComponents0(boolean increaseWriterIndex,
final int cIndex, ByteBuf[] buffers, int arrOffset) {
final int len = buffers.length, count = len - arrOffset;
// only set ci after we've shifted so that finally block logic is always correct
int ci = Integer.MAX_VALUE;
try {
checkComponentIndex(cIndex);
shiftComps(cIndex, count); // will increase componentCount
ci = cIndex; // only set this after we've shifted so that finally block logic is always correct
int nextOffset = cIndex > 0 ? components[cIndex - 1].endOffset : 0;
for (ByteBuf b; arrOffset < len && (b = buffers[arrOffset]) != null; arrOffset++, ci++) {
for (ci = cIndex; arrOffset < len; arrOffset++, ci++) {
ByteBuf b = buffers[arrOffset];
if (b == null) {
break;
}
Component c = newComponent(b, nextOffset);
components[ci] = c;
nextOffset = c.endOffset;
}
return ci;
return this;
} finally {
// ci is now the index following the last successfully added component
if (ci < componentCount) {
@ -372,14 +371,13 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
updateComponentOffsets(ci); // only need to do this here for components after the added ones
}
if (increaseWriterIndex && ci > cIndex && ci <= componentCount) {
writerIndex(writerIndex() + components[ci - 1].endOffset - components[cIndex].offset);
writerIndex += components[ci - 1].endOffset - components[cIndex].offset;
}
}
}
private <T> int addComponents0(boolean increaseWriterIndex, int cIndex,
ByteWrapper<T> wrapper, T[] buffers, int offset) {
checkNotNull(buffers, "buffers");
checkComponentIndex(cIndex);
// No need for consolidation
@ -405,26 +403,92 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
* Be aware that this method does not increase the {@code writerIndex} of the {@link CompositeByteBuf}.
* If you need to have it increased you need to handle it by your own.
* <p>
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects in {@code buffers} is transferred to this
* {@link CompositeByteBuf}.
* @param cIndex the index on which the {@link ByteBuf} will be added.
* @param buffers the {@link ByteBuf}s to add. {@link ByteBuf#release()} ownership of all
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects is transfered to this
* {@link ByteBuf#release()} ownership of all {@link ByteBuf} objects is transferred to this
* {@link CompositeByteBuf}.
*/
public CompositeByteBuf addComponents(int cIndex, Iterable<ByteBuf> buffers) {
addComponents0(false, cIndex, buffers);
consolidateIfNeeded();
return this;
return addComponents(false, cIndex, buffers);
}
/**
* Add the given {@link ByteBuf} and increase the {@code writerIndex} if {@code increaseWriterIndex} is
* {@code true}. If the provided buffer is a {@link CompositeByteBuf} itself, a "shallow copy" of its
* readable components will be performed. Thus the actual number of new components added may vary
* and in particular will be zero if the provided buffer is not readable.
* <p>
* {@link ByteBuf#release()} ownership of {@code buffer} is transferred to this {@link CompositeByteBuf}.
* @param buffer the {@link ByteBuf} to add. {@link ByteBuf#release()} ownership is transferred to this
* {@link CompositeByteBuf}.
*/
public CompositeByteBuf addFlattenedComponents(boolean increaseWriterIndex, ByteBuf buffer) {
checkNotNull(buffer, "buffer");
final int ridx = buffer.readerIndex();
final int widx = buffer.writerIndex();
if (ridx == widx) {
buffer.release();
return this;
}
if (!(buffer instanceof CompositeByteBuf)) {
addComponent0(increaseWriterIndex, componentCount, buffer);
consolidateIfNeeded();
return this;
}
final CompositeByteBuf from = (CompositeByteBuf) buffer;
from.checkIndex(ridx, widx - ridx);
final Component[] fromComponents = from.components;
final int compCountBefore = componentCount;
final int writerIndexBefore = writerIndex;
try {
for (int cidx = from.toComponentIndex0(ridx), newOffset = capacity();; cidx++) {
final Component component = fromComponents[cidx];
final int compOffset = component.offset;
final int fromIdx = Math.max(ridx, compOffset);
final int toIdx = Math.min(widx, component.endOffset);
final int len = toIdx - fromIdx;
if (len > 0) { // skip empty components
// Note that it's safe to just retain the unwrapped buf here, even in the case
// of PooledSlicedByteBufs - those slices will still be properly released by the
// source Component's free() method.
addComp(componentCount, new Component(
component.buf.retain(), component.idx(fromIdx), newOffset, len, null));
}
if (widx == toIdx) {
break;
}
newOffset += len;
}
if (increaseWriterIndex) {
writerIndex = writerIndexBefore + (widx - ridx);
}
consolidateIfNeeded();
buffer.release();
buffer = null;
return this;
} finally {
if (buffer != null) {
// if we did not succeed, attempt to rollback any components that were added
if (increaseWriterIndex) {
writerIndex = writerIndexBefore;
}
for (int cidx = componentCount - 1; cidx >= compCountBefore; cidx--) {
components[cidx].free();
removeComp(cidx);
}
}
}
}
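A usage sketch for the addFlattenedComponents(...) method added above: appending one CompositeByteBuf to another transfers references to its readable components (a "shallow copy") instead of nesting the whole composite as a single opaque component.

import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class FlattenedComponentsSketch {
    public static void main(String[] args) {
        CompositeByteBuf inner = Unpooled.compositeBuffer();
        inner.addComponent(true, Unpooled.copiedBuffer("foo", CharsetUtil.UTF_8));
        inner.addComponent(true, Unpooled.copiedBuffer("bar", CharsetUtil.UTF_8));

        CompositeByteBuf outer = Unpooled.compositeBuffer();
        // Ownership of 'inner' is transferred; its readable components are added directly.
        outer.addFlattenedComponents(true, inner);

        System.out.println("components=" + outer.numComponents()
                + ", content=" + outer.toString(CharsetUtil.UTF_8));
        outer.release();
    }
}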
// TODO optimize further, similar to ByteBuf[] version
// (difference here is that we don't *always* know the precise size increase in advance,
// but we do in the most common case that the Iterable is a Collection)
private int addComponents0(boolean increaseIndex, int cIndex, Iterable<ByteBuf> buffers) {
private CompositeByteBuf addComponents(boolean increaseIndex, int cIndex, Iterable<ByteBuf> buffers) {
if (buffers instanceof ByteBuf) {
// If buffers also implements ByteBuf (e.g. CompositeByteBuf), it has to go to addComponent(ByteBuf).
return addComponent0(increaseIndex, cIndex, (ByteBuf) buffers);
return addComponent(increaseIndex, cIndex, (ByteBuf) buffers);
}
checkNotNull(buffers, "buffers");
Iterator<ByteBuf> it = buffers.iterator();
@ -440,12 +504,13 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
cIndex = addComponent0(increaseIndex, cIndex, b) + 1;
cIndex = Math.min(cIndex, componentCount);
}
return cIndex;
} finally {
while (it.hasNext()) {
ReferenceCountUtil.safeRelease(it.next());
}
}
consolidateIfNeeded();
return this;
}
/**
@ -497,7 +562,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
return;
}
int nextIndex = cIndex > 0 ? components[cIndex].endOffset : 0;
int nextIndex = cIndex > 0 ? components[cIndex - 1].endOffset : 0;
for (; cIndex < size; cIndex++) {
Component c = components[cIndex];
c.reposition(nextIndex);
@ -516,7 +581,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
if (lastAccessed == comp) {
lastAccessed = null;
}
comp.freeIfNecessary();
comp.free();
removeComp(cIndex);
if (comp.length() > 0) {
// Only need to call updateComponentOffsets if the length was > 0
@ -547,7 +612,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
if (lastAccessed == c) {
lastAccessed = null;
}
c.freeIfNecessary();
c.free();
}
removeCompRange(cIndex, endIndex);
@ -748,6 +813,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
consolidateIfNeeded();
}
} else if (newCapacity < oldCapacity) {
lastAccessed = null;
int i = size - 1;
for (int bytesToTrim = oldCapacity - newCapacity; i >= 0; i--) {
Component c = components[i];
@ -755,18 +821,23 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
if (bytesToTrim < cLength) {
// Trim the last component
c.endOffset -= bytesToTrim;
c.slice = null;
ByteBuf slice = c.slice;
if (slice != null) {
// We must replace the cached slice with a derived one to ensure that
// it can later be released properly in the case of PooledSlicedByteBuf.
c.slice = slice.slice(0, c.length());
}
break;
}
c.freeIfNecessary();
c.free();
bytesToTrim -= cLength;
}
removeCompRange(i + 1, size);
if (readerIndex() > newCapacity) {
setIndex(newCapacity, newCapacity);
} else if (writerIndex() > newCapacity) {
writerIndex(newCapacity);
setIndex0(newCapacity, newCapacity);
} else if (writerIndex > newCapacity) {
writerIndex = newCapacity;
}
}
return this;
@ -801,7 +872,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
*/
public int toComponentIndex(int offset) {
checkIndex(offset);
return toComponentIndex(offset);
return toComponentIndex0(offset);
}
private int toComponentIndex0(int offset) {
@ -813,6 +884,9 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
}
}
}
if (size <= 2) { // fast-path for 1 and 2 component count
return size == 1 || offset < components[0].endOffset ? 0 : 1;
}
for (int low = 0, high = size; low <= high;) {
int mid = low + high >>> 1;
Component c = components[mid];
@ -1076,7 +1150,8 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
@Override
public CompositeByteBuf setShort(int index, int value) {
super.setShort(index, value);
checkIndex(index, 2);
_setShort(index, value);
return this;
}
@ -1110,7 +1185,8 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
@Override
public CompositeByteBuf setMedium(int index, int value) {
super.setMedium(index, value);
checkIndex(index, 3);
_setMedium(index, value);
return this;
}
@ -1144,7 +1220,8 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
@Override
public CompositeByteBuf setInt(int index, int value) {
super.setInt(index, value);
checkIndex(index, 4);
_setInt(index, value);
return this;
}
@ -1178,7 +1255,8 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
@Override
public CompositeByteBuf setLong(int index, long value) {
super.setLong(index, value);
checkIndex(index, 8);
_setLong(index, value);
return this;
}
@ -1286,7 +1364,6 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
int i = toComponentIndex0(index);
int readBytes = 0;
do {
Component c = components[i];
int localLength = Math.min(length, c.endOffset - index);
@ -1304,15 +1381,11 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
}
}
index += localReadBytes;
length -= localReadBytes;
readBytes += localReadBytes;
if (localReadBytes == localLength) {
index += localLength;
length -= localLength;
readBytes += localLength;
i ++;
} else {
index += localReadBytes;
length -= localReadBytes;
readBytes += localReadBytes;
}
} while (length > 0);
@ -1350,15 +1423,11 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
}
}
index += localReadBytes;
length -= localReadBytes;
readBytes += localReadBytes;
if (localReadBytes == localLength) {
index += localLength;
length -= localLength;
readBytes += localLength;
i ++;
} else {
index += localReadBytes;
length -= localReadBytes;
readBytes += localReadBytes;
}
} while (length > 0);
@ -1396,15 +1465,11 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
}
}
index += localReadBytes;
length -= localReadBytes;
readBytes += localReadBytes;
if (localReadBytes == localLength) {
index += localLength;
length -= localLength;
readBytes += localLength;
i ++;
} else {
index += localReadBytes;
length -= localReadBytes;
readBytes += localReadBytes;
}
} while (length > 0);
@ -1541,8 +1606,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
case 0:
return EMPTY_NIO_BUFFER;
case 1:
Component c = components[0];
return c.buf.internalNioBuffer(c.idx(index), length);
return components[0].internalNioBuffer(index, length);
default:
throw new UnsupportedOperationException();
}
@ -1566,7 +1630,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
ByteBuffer[] buffers = nioBuffers(index, length);
if (buffers.length == 1) {
return buffers[0].duplicate();
return buffers[0];
}
ByteBuffer merged = ByteBuffer.allocate(length).order(order());
@ -1676,7 +1740,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
int writerIndex = writerIndex();
if (readerIndex == writerIndex && writerIndex == capacity()) {
for (int i = 0, size = componentCount; i < size; i++) {
components[i].freeIfNecessary();
components[i].free();
}
lastAccessed = null;
clearComps();
@ -1686,16 +1750,26 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
}
// Remove read components.
int firstComponentId = toComponentIndex0(readerIndex);
for (int i = 0; i < firstComponentId; i ++) {
components[i].freeIfNecessary();
int firstComponentId = 0;
Component c = null;
for (int size = componentCount; firstComponentId < size; firstComponentId++) {
c = components[firstComponentId];
if (c.endOffset > readerIndex) {
break;
}
c.free();
}
if (firstComponentId == 0) {
return this; // Nothing to discard
}
Component la = lastAccessed;
if (la != null && la.endOffset <= readerIndex) {
lastAccessed = null;
}
lastAccessed = null;
removeCompRange(0, firstComponentId);
// Update indexes and markers.
Component first = components[0];
int offset = first.offset;
int offset = c.offset;
updateComponentOffsets(0);
setIndex(readerIndex - offset, writerIndex - offset);
adjustMarkers(offset);
@ -1714,7 +1788,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
int writerIndex = writerIndex();
if (readerIndex == writerIndex && writerIndex == capacity()) {
for (int i = 0, size = componentCount; i < size; i++) {
components[i].freeIfNecessary();
components[i].free();
}
lastAccessed = null;
clearComps();
@ -1723,30 +1797,30 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
return this;
}
// Remove read components.
int firstComponentId = toComponentIndex0(readerIndex);
for (int i = 0; i < firstComponentId; i ++) {
Component c = components[i];
c.freeIfNecessary();
if (lastAccessed == c) {
lastAccessed = null;
int firstComponentId = 0;
Component c = null;
for (int size = componentCount; firstComponentId < size; firstComponentId++) {
c = components[firstComponentId];
if (c.endOffset > readerIndex) {
break;
}
c.free();
}
// Remove or replace the first readable component with a new slice.
Component c = components[firstComponentId];
if (readerIndex == c.endOffset) {
// new slice would be empty, so remove instead
c.freeIfNecessary();
if (lastAccessed == c) {
lastAccessed = null;
}
firstComponentId++;
} else {
c.offset = 0;
c.endOffset -= readerIndex;
c.adjustment += readerIndex;
c.slice = null;
// Replace the first readable component with a new slice.
int trimmedBytes = readerIndex - c.offset;
c.offset = 0;
c.endOffset -= readerIndex;
c.adjustment += readerIndex;
ByteBuf slice = c.slice;
if (slice != null) {
// We must replace the cached slice with a derived one to ensure that
// it can later be released properly in the case of PooledSlicedByteBuf.
c.slice = slice.slice(trimmedBytes, c.length());
}
Component la = lastAccessed;
if (la != null && la.endOffset <= readerIndex) {
lastAccessed = null;
}
removeCompRange(0, firstComponentId);
@ -1803,7 +1877,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
// copy then release
void transferTo(ByteBuf dst) {
dst.writeBytes(buf, idx(offset), length());
freeIfNecessary();
free();
}
ByteBuf slice() {
@ -1814,7 +1888,17 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
return buf.duplicate().setIndex(idx(offset), idx(endOffset));
}
void freeIfNecessary() {
ByteBuffer internalNioBuffer(int index, int length) {
// We must not return the unwrapped buffer's internal buffer
// if it was originally added as a slice - this check of the
// slice field is threadsafe since we only care whether it
// was set upon Component construction, and we aren't
// attempting to access the referenced slice itself
return slice != null ? buf.nioBuffer(idx(index), length)
: buf.internalNioBuffer(idx(index), length);
}
void free() {
// Release the slice if present since it may have a different
// refcount to the unwrapped buf if it is a PooledSlicedByteBuf
ByteBuf buffer = slice;
@ -1823,7 +1907,8 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
} else {
buf.release();
}
// null out in either case since it could be racy
// null out in either case since it could be racy if set lazily (but not
// in the case we care about, where it will have been set in the ctor)
slice = null;
}
}
@ -2065,7 +2150,7 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
@Override
public CompositeByteBuf writeBytes(byte[] src) {
writeBytes(src, 0, src.length);
super.writeBytes(src, 0, src.length);
return this;
}
@ -2129,10 +2214,15 @@ public class CompositeByteBuf extends AbstractReferenceCountedByteBuf implements
// We're not using foreach to avoid creating an iterator.
// see https://github.com/netty/netty/issues/2642
for (int i = 0, size = componentCount; i < size; i++) {
components[i].freeIfNecessary();
components[i].free();
}
}
@Override
boolean isAccessible() {
return !freed;
}
@Override
public ByteBuf unwrap() {
return null;


@ -16,6 +16,8 @@
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.ByteProcessor;
import io.netty.util.internal.EmptyArrays;
import io.netty.util.internal.PlatformDependent;
@ -223,9 +225,7 @@ public final class EmptyByteBuf extends ByteBuf {
@Override
public ByteBuf ensureWritable(int minWritableBytes) {
if (minWritableBytes < 0) {
throw new IllegalArgumentException("minWritableBytes: " + minWritableBytes + " (expected: >= 0)");
}
checkPositiveOrZero(minWritableBytes, "minWritableBytes");
if (minWritableBytes != 0) {
throw new IndexOutOfBoundsException();
}
@ -234,9 +234,7 @@ public final class EmptyByteBuf extends ByteBuf {
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
if (minWritableBytes < 0) {
throw new IllegalArgumentException("minWritableBytes: " + minWritableBytes + " (expected: >= 0)");
}
checkPositiveOrZero(minWritableBytes, "minWritableBytes");
if (minWritableBytes == 0) {
return 0;
@ -686,7 +684,7 @@ public final class EmptyByteBuf extends ByteBuf {
@Override
public CharSequence readCharSequence(int length, Charset charset) {
checkLength(length);
return null;
return StringUtil.EMPTY_STRING;
}
@Override
@ -1048,9 +1046,7 @@ public final class EmptyByteBuf extends ByteBuf {
}
private ByteBuf checkIndex(int index, int length) {
if (length < 0) {
throw new IllegalArgumentException("length: " + length);
}
checkPositiveOrZero(length, "length");
if (index != 0 || length != 0) {
throw new IndexOutOfBoundsException();
}
@ -1058,9 +1054,7 @@ public final class EmptyByteBuf extends ByteBuf {
}
private ByteBuf checkLength(int length) {
if (length < 0) {
throw new IllegalArgumentException("length: " + length + " (expected: >= 0)");
}
checkPositiveOrZero(length, "length");
if (length != 0) {
throw new IndexOutOfBoundsException();
}
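The repeated manual negative-length checks in this and the surrounding files are being replaced by the internal helper ObjectUtil.checkPositiveOrZero(value, name). A minimal sketch of the same pattern; note that ObjectUtil lives in io.netty.util.internal and is an implementation detail rather than public API:

import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;

public final class CheckPositiveOrZeroSketch {
    static String dump(int length) {
        checkPositiveOrZero(length, "length"); // throws IllegalArgumentException if negative
        return length == 0 ? "" : "...";
    }

    public static void main(String[] args) {
        System.out.println("empty dump: '" + dump(0) + "'");
        try {
            dump(-1);
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}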


@ -26,6 +26,7 @@ import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import static java.lang.Math.max;
abstract class PoolArena<T> implements PoolArenaMetric {
@ -205,7 +206,7 @@ abstract class PoolArena<T> implements PoolArenaMetric {
assert s.doNotDestroy && s.elemSize == normCapacity;
long handle = s.allocate();
assert handle >= 0;
s.chunk.initBufWithSubpage(buf, handle, reqCapacity);
s.chunk.initBufWithSubpage(buf, null, handle, reqCapacity);
incTinySmallAllocation(tiny);
return;
}
@ -242,9 +243,8 @@ abstract class PoolArena<T> implements PoolArenaMetric {
// Add a new chunk.
PoolChunk<T> c = newChunk(pageSize, maxOrder, pageShifts, chunkSize);
long handle = c.allocate(normCapacity);
assert handle > 0;
c.initBuf(buf, handle, reqCapacity);
boolean success = c.allocate(buf, reqCapacity, normCapacity);
assert success;
qInit.add(c);
}
@ -263,7 +263,7 @@ abstract class PoolArena<T> implements PoolArenaMetric {
allocationsHuge.increment();
}
void free(PoolChunk<T> chunk, long handle, int normCapacity, PoolThreadCache cache) {
void free(PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, int normCapacity, PoolThreadCache cache) {
if (chunk.unpooled) {
int size = chunk.chunkSize();
destroyChunk(chunk);
@ -271,12 +271,12 @@ abstract class PoolArena<T> implements PoolArenaMetric {
deallocationsHuge.increment();
} else {
SizeClass sizeClass = sizeClass(normCapacity);
if (cache != null && cache.add(this, chunk, handle, normCapacity, sizeClass)) {
if (cache != null && cache.add(this, chunk, nioBuffer, handle, normCapacity, sizeClass)) {
// cached, so do not free it.
return;
}
freeChunk(chunk, handle, sizeClass);
freeChunk(chunk, handle, sizeClass, nioBuffer, false);
}
}
@ -287,23 +287,27 @@ abstract class PoolArena<T> implements PoolArenaMetric {
return isTiny(normCapacity) ? SizeClass.Tiny : SizeClass.Small;
}
void freeChunk(PoolChunk<T> chunk, long handle, SizeClass sizeClass) {
void freeChunk(PoolChunk<T> chunk, long handle, SizeClass sizeClass, ByteBuffer nioBuffer, boolean finalizer) {
final boolean destroyChunk;
synchronized (this) {
switch (sizeClass) {
case Normal:
++deallocationsNormal;
break;
case Small:
++deallocationsSmall;
break;
case Tiny:
++deallocationsTiny;
break;
default:
throw new Error();
// We only call this if freeChunk is not called because of the PoolThreadCache finalizer as otherwise this
// may fail due to lazy class-loading in, for example, Tomcat.
if (!finalizer) {
switch (sizeClass) {
case Normal:
++deallocationsNormal;
break;
case Small:
++deallocationsSmall;
break;
case Tiny:
++deallocationsTiny;
break;
default:
throw new Error();
}
}
destroyChunk = !chunk.parent.free(chunk, handle);
destroyChunk = !chunk.parent.free(chunk, handle, nioBuffer);
}
if (destroyChunk) {
// destroyChunk not need to be called while holding the synchronized lock.
@ -331,9 +335,7 @@ abstract class PoolArena<T> implements PoolArenaMetric {
}
int normalizeCapacity(int reqCapacity) {
if (reqCapacity < 0) {
throw new IllegalArgumentException("capacity: " + reqCapacity + " (expected: 0+)");
}
checkPositiveOrZero(reqCapacity, "reqCapacity");
if (reqCapacity >= chunkSize) {
return directMemoryCacheAlignment == 0 ? reqCapacity : alignCapacity(reqCapacity);
@ -387,6 +389,7 @@ abstract class PoolArena<T> implements PoolArenaMetric {
}
PoolChunk<T> oldChunk = buf.chunk;
ByteBuffer oldNioBuffer = buf.tmpNioBuf;
long oldHandle = buf.handle;
T oldMemory = buf.memory;
int oldOffset = buf.offset;
@ -415,7 +418,7 @@ abstract class PoolArena<T> implements PoolArenaMetric {
buf.setIndex(readerIndex, writerIndex);
if (freeOldMemory) {
free(oldChunk, oldHandle, oldMaxLength, buf.cache);
free(oldChunk, oldNioBuffer, oldHandle, oldMaxLength, buf.cache);
}
}
@ -725,11 +728,16 @@ abstract class PoolArena<T> implements PoolArenaMetric {
return true;
}
private int offsetCacheLine(ByteBuffer memory) {
// mark as package-private, only for unit test
int offsetCacheLine(ByteBuffer memory) {
// We can only calculate the offset if Unsafe is present as otherwise directBufferAddress(...) will
// throw an NPE.
return HAS_UNSAFE ?
(int) (PlatformDependent.directBufferAddress(memory) & directMemoryCacheAlignmentMask) : 0;
int remainder = HAS_UNSAFE
? (int) (PlatformDependent.directBufferAddress(memory) & directMemoryCacheAlignmentMask)
: 0;
// offset = alignment - address & (alignment - 1)
return directMemoryCacheAlignment - remainder;
}
@Override
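A standalone sketch of the cache-line alignment arithmetic in offsetCacheLine(...) above: for a power-of-two alignment, address & (alignment - 1) is the misalignment, and alignment - misalignment is how many bytes to skip so the usable region starts on a boundary. Note that, as written, an already-aligned address still skips a full alignment block. The addresses below are made up for illustration.

public final class AlignmentSketch {
    static long alignedStart(long address, int alignment) {
        long remainder = address & (alignment - 1);
        // offset = alignment - (address & (alignment - 1)), as in the comment above.
        long offset = alignment - remainder;
        return address + offset;
    }

    public static void main(String[] args) {
        int alignment = 64;
        for (long address : new long[] { 0x1000, 0x1001, 0x103F, 0x1040 }) {
            long start = alignedStart(address, alignment);
            System.out.printf("address=0x%X -> start=0x%X (aligned=%b)%n",
                    address, start, (start & (alignment - 1)) == 0);
        }
    }
}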


@ -16,6 +16,10 @@
package io.netty.buffer;
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;
/**
* Description of algorithm for PageRun/PoolSubpage allocation from PoolChunk
*
@ -107,7 +111,6 @@ final class PoolChunk<T> implements PoolChunkMetric {
final T memory;
final boolean unpooled;
final int offset;
private final byte[] memoryMap;
private final byte[] depthMap;
private final PoolSubpage<T>[] subpages;
@ -122,6 +125,13 @@ final class PoolChunk<T> implements PoolChunkMetric {
/** Used to mark memory as unusable */
private final byte unusable;
// Use as cache for ByteBuffer created from the memory. These are just duplicates and so are only a container
// around the memory itself. These are often needed for operations within the Pooled*ByteBuf and so
// may produce extra GC, which can be greatly reduced by caching the duplicates.
//
// This may be null if the PoolChunk is unpooled as pooling the ByteBuffer instances does not make any sense here.
private final Deque<ByteBuffer> cachedNioBuffers;
private int freeBytes;
PoolChunkList<T> parent;
@ -163,6 +173,7 @@ final class PoolChunk<T> implements PoolChunkMetric {
}
subpages = newSubpageArray(maxSubpageAllocs);
cachedNioBuffers = new ArrayDeque<ByteBuffer>(8);
}
/** Creates a special chunk that is not pooled. */
@ -182,6 +193,7 @@ final class PoolChunk<T> implements PoolChunkMetric {
chunkSize = size;
log2ChunkSize = log2(chunkSize);
maxSubpageAllocs = 0;
cachedNioBuffers = null;
}
@SuppressWarnings("unchecked")
@ -210,12 +222,20 @@ final class PoolChunk<T> implements PoolChunkMetric {
return 100 - freePercentage;
}
long allocate(int normCapacity) {
boolean allocate(PooledByteBuf<T> buf, int reqCapacity, int normCapacity) {
final long handle;
if ((normCapacity & subpageOverflowMask) != 0) { // >= pageSize
return allocateRun(normCapacity);
handle = allocateRun(normCapacity);
} else {
return allocateSubpage(normCapacity);
handle = allocateSubpage(normCapacity);
}
if (handle < 0) {
return false;
}
ByteBuffer nioBuffer = cachedNioBuffers != null ? cachedNioBuffers.pollLast() : null;
initBuf(buf, nioBuffer, handle, reqCapacity);
return true;
}
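A generic sketch of the ByteBuffer-duplicate caching introduced above: a small bounded Deque per chunk hands back previously created duplicates on allocate (pollLast) and accepts them again on free (offer), capped so the cache cannot grow without bound. The cap constant and buffer source below are illustrative, not Netty's.

import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

public final class NioBufferCacheSketch {
    private static final int MAX_CACHED = 1023; // illustrative cap
    private final ByteBuffer memory = ByteBuffer.allocateDirect(16 * 1024);
    private final Deque<ByteBuffer> cached = new ArrayDeque<ByteBuffer>(8);

    ByteBuffer acquireDuplicate() {
        ByteBuffer duplicate = cached.pollLast();
        // Only create a new duplicate when nothing is cached.
        return duplicate != null ? duplicate : memory.duplicate();
    }

    void releaseDuplicate(ByteBuffer duplicate) {
        if (cached.size() < MAX_CACHED) {
            cached.offer(duplicate);
        }
    }

    public static void main(String[] args) {
        NioBufferCacheSketch chunk = new NioBufferCacheSketch();
        ByteBuffer first = chunk.acquireDuplicate();
        chunk.releaseDuplicate(first);
        // The second acquire reuses the cached duplicate instead of creating a new one.
        System.out.println(chunk.acquireDuplicate() == first);
    }
}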
/**
@ -310,8 +330,8 @@ final class PoolChunk<T> implements PoolChunkMetric {
}
/**
* Create/ initialize a new PoolSubpage of normCapacity
* Any PoolSubpage created/ initialized here is added to subpage pool in the PoolArena that owns this PoolChunk
* Create / initialize a new PoolSubpage of normCapacity
* Any PoolSubpage created / initialized here is added to subpage pool in the PoolArena that owns this PoolChunk
*
* @param normCapacity normalized capacity
* @return index in memoryMap
@ -320,8 +340,8 @@ final class PoolChunk<T> implements PoolChunkMetric {
// Obtain the head of the PoolSubPage pool that is owned by the PoolArena and synchronize on it.
// This is needed as we may add it back and so alter the linked-list structure.
PoolSubpage<T> head = arena.findSubpagePoolHead(normCapacity);
int d = maxOrder; // subpages are only allocated from pages, i.e., leaves
synchronized (head) {
int d = maxOrder; // subpages are only allocated from pages, i.e., leaves
int id = allocateNode(d);
if (id < 0) {
return id;
@ -352,7 +372,7 @@ final class PoolChunk<T> implements PoolChunkMetric {
*
* @param handle handle to free
*/
void free(long handle) {
void free(long handle, ByteBuffer nioBuffer) {
int memoryMapIdx = memoryMapIdx(handle);
int bitmapIdx = bitmapIdx(handle);
@ -372,26 +392,32 @@ final class PoolChunk<T> implements PoolChunkMetric {
freeBytes += runLength(memoryMapIdx);
setValue(memoryMapIdx, depth(memoryMapIdx));
updateParentsFree(memoryMapIdx);
if (nioBuffer != null && cachedNioBuffers != null &&
cachedNioBuffers.size() < PooledByteBufAllocator.DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK) {
cachedNioBuffers.offer(nioBuffer);
}
}
void initBuf(PooledByteBuf<T> buf, long handle, int reqCapacity) {
void initBuf(PooledByteBuf<T> buf, ByteBuffer nioBuffer, long handle, int reqCapacity) {
int memoryMapIdx = memoryMapIdx(handle);
int bitmapIdx = bitmapIdx(handle);
if (bitmapIdx == 0) {
byte val = value(memoryMapIdx);
assert val == unusable : String.valueOf(val);
buf.init(this, handle, runOffset(memoryMapIdx) + offset, reqCapacity, runLength(memoryMapIdx),
arena.parent.threadCache());
buf.init(this, nioBuffer, handle, runOffset(memoryMapIdx) + offset,
reqCapacity, runLength(memoryMapIdx), arena.parent.threadCache());
} else {
initBufWithSubpage(buf, handle, bitmapIdx, reqCapacity);
initBufWithSubpage(buf, nioBuffer, handle, bitmapIdx, reqCapacity);
}
}
void initBufWithSubpage(PooledByteBuf<T> buf, long handle, int reqCapacity) {
initBufWithSubpage(buf, handle, bitmapIdx(handle), reqCapacity);
void initBufWithSubpage(PooledByteBuf<T> buf, ByteBuffer nioBuffer, long handle, int reqCapacity) {
initBufWithSubpage(buf, nioBuffer, handle, bitmapIdx(handle), reqCapacity);
}
private void initBufWithSubpage(PooledByteBuf<T> buf, long handle, int bitmapIdx, int reqCapacity) {
private void initBufWithSubpage(PooledByteBuf<T> buf, ByteBuffer nioBuffer,
long handle, int bitmapIdx, int reqCapacity) {
assert bitmapIdx != 0;
int memoryMapIdx = memoryMapIdx(handle);
@ -401,7 +427,7 @@ final class PoolChunk<T> implements PoolChunkMetric {
assert reqCapacity <= subpage.elemSize;
buf.init(
this, handle,
this, nioBuffer, handle,
runOffset(memoryMapIdx) + (bitmapIdx & 0x3FFFFFFF) * subpage.elemSize + offset,
reqCapacity, subpage.elemSize, arena.parent.threadCache());
}


@ -25,6 +25,8 @@ import java.util.List;
import static java.lang.Math.*;
import java.nio.ByteBuffer;
final class PoolChunkList<T> implements PoolChunkListMetric {
private static final Iterator<PoolChunkMetric> EMPTY_METRICS = Collections.<PoolChunkMetric>emptyList().iterator();
private final PoolArena<T> arena;
@ -75,21 +77,14 @@ final class PoolChunkList<T> implements PoolChunkListMetric {
}
boolean allocate(PooledByteBuf<T> buf, int reqCapacity, int normCapacity) {
if (head == null || normCapacity > maxCapacity) {
if (normCapacity > maxCapacity) {
// The requested capacity is larger than the capacity which can be handled by the
// PoolChunks that are contained in this PoolChunkList.
return false;
}
for (PoolChunk<T> cur = head;;) {
long handle = cur.allocate(normCapacity);
if (handle < 0) {
cur = cur.next;
if (cur == null) {
return false;
}
} else {
cur.initBuf(buf, handle, reqCapacity);
for (PoolChunk<T> cur = head; cur != null; cur = cur.next) {
if (cur.allocate(buf, reqCapacity, normCapacity)) {
if (cur.usage() >= maxUsage) {
remove(cur);
nextList.add(cur);
@ -97,10 +92,11 @@ final class PoolChunkList<T> implements PoolChunkListMetric {
return true;
}
}
return false;
}
boolean free(PoolChunk<T> chunk, long handle) {
chunk.free(handle);
boolean free(PoolChunk<T> chunk, long handle, ByteBuffer nioBuffer) {
chunk.free(handle, nioBuffer);
if (chunk.usage() < minUsage) {
remove(chunk);
// Move the PoolChunk down the PoolChunkList linked-list.


@ -204,16 +204,24 @@ final class PoolSubpage<T> implements PoolSubpageMetric {
final int maxNumElems;
final int numAvail;
final int elemSize;
synchronized (chunk.arena) {
if (!this.doNotDestroy) {
doNotDestroy = false;
// Not used for creating the String.
maxNumElems = numAvail = elemSize = -1;
} else {
doNotDestroy = true;
maxNumElems = this.maxNumElems;
numAvail = this.numAvail;
elemSize = this.elemSize;
if (chunk == null) {
// This is the head so there is no need to synchronize at all as these never change.
doNotDestroy = true;
maxNumElems = 0;
numAvail = 0;
elemSize = -1;
} else {
synchronized (chunk.arena) {
if (!this.doNotDestroy) {
doNotDestroy = false;
// Not used for creating the String.
maxNumElems = numAvail = elemSize = -1;
} else {
doNotDestroy = true;
maxNumElems = this.maxNumElems;
numAvail = this.numAvail;
elemSize = this.elemSize;
}
}
}
@ -227,6 +235,11 @@ final class PoolSubpage<T> implements PoolSubpageMetric {
@Override
public int maxNumElements() {
if (chunk == null) {
// It's the head.
return 0;
}
synchronized (chunk.arena) {
return maxNumElems;
}
@ -234,6 +247,11 @@ final class PoolSubpage<T> implements PoolSubpageMetric {
@Override
public int numAvailable() {
if (chunk == null) {
// It's the head.
return 0;
}
synchronized (chunk.arena) {
return numAvail;
}
@ -241,6 +259,11 @@ final class PoolSubpage<T> implements PoolSubpageMetric {
@Override
public int elementSize() {
if (chunk == null) {
// It's the head.
return -1;
}
synchronized (chunk.arena) {
return elemSize;
}
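A minimal sketch of the sentinel-head pattern the metric methods above now guard for: the head of the subpage linked list has no backing chunk, so its accessors return fixed defaults instead of synchronizing on an arena that does not exist. Names below are illustrative, not Netty's.

public final class SentinelHeadSketch {
    static final class Node {
        final Object chunk;   // null only for the head sentinel
        int available;

        Node(Object chunk, int available) {
            this.chunk = chunk;
            this.available = available;
        }

        int numAvailable() {
            if (chunk == null) {
                return 0; // it's the head; nothing is ever allocated from it
            }
            synchronized (chunk) {
                return available;
            }
        }
    }

    public static void main(String[] args) {
        Node head = new Node(null, -1);
        Node real = new Node(new Object(), 42);
        System.out.println(head.numAvailable() + " " + real.numAvailable());
    }
}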


@ -17,6 +17,8 @@
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.buffer.PoolArena.SizeClass;
import io.netty.util.Recycler;
import io.netty.util.Recycler.Handle;
@ -65,10 +67,7 @@ final class PoolThreadCache {
PoolThreadCache(PoolArena<byte[]> heapArena, PoolArena<ByteBuffer> directArena,
int tinyCacheSize, int smallCacheSize, int normalCacheSize,
int maxCachedBufferCapacity, int freeSweepAllocationThreshold) {
if (maxCachedBufferCapacity < 0) {
throw new IllegalArgumentException("maxCachedBufferCapacity: "
+ maxCachedBufferCapacity + " (expected: >= 0)");
}
checkPositiveOrZero(maxCachedBufferCapacity, "maxCachedBufferCapacity");
this.freeSweepAllocationThreshold = freeSweepAllocationThreshold;
this.heapArena = heapArena;
this.directArena = directArena;
@ -200,12 +199,13 @@ final class PoolThreadCache {
* Returns {@code true} if it fit into the cache {@code false} otherwise.
*/
@SuppressWarnings({ "unchecked", "rawtypes" })
boolean add(PoolArena<?> area, PoolChunk chunk, long handle, int normCapacity, SizeClass sizeClass) {
boolean add(PoolArena<?> area, PoolChunk chunk, ByteBuffer nioBuffer,
long handle, int normCapacity, SizeClass sizeClass) {
MemoryRegionCache<?> cache = cache(area, normCapacity, sizeClass);
if (cache == null) {
return false;
}
return cache.add(chunk, handle);
return cache.add(chunk, nioBuffer, handle);
}
private MemoryRegionCache<?> cache(PoolArena<?> area, int normCapacity, SizeClass sizeClass) {
@ -227,23 +227,23 @@ final class PoolThreadCache {
try {
super.finalize();
} finally {
free();
free(true);
}
}
/**
* Should be called if the Thread that uses this cache is about to exit, to release resources out of the cache
*/
void free() {
void free(boolean finalizer) {
// As free() may be called either by the finalizer or by FastThreadLocal.onRemoval(...) we need to ensure
// we only call this one time.
if (freed.compareAndSet(false, true)) {
int numFreed = free(tinySubPageDirectCaches) +
free(smallSubPageDirectCaches) +
free(normalDirectCaches) +
free(tinySubPageHeapCaches) +
free(smallSubPageHeapCaches) +
free(normalHeapCaches);
int numFreed = free(tinySubPageDirectCaches, finalizer) +
free(smallSubPageDirectCaches, finalizer) +
free(normalDirectCaches, finalizer) +
free(tinySubPageHeapCaches, finalizer) +
free(smallSubPageHeapCaches, finalizer) +
free(normalHeapCaches, finalizer);
if (numFreed > 0 && logger.isDebugEnabled()) {
logger.debug("Freed {} thread-local buffer(s) from thread: {}", numFreed,
@ -260,23 +260,23 @@ final class PoolThreadCache {
}
}
private static int free(MemoryRegionCache<?>[] caches) {
private static int free(MemoryRegionCache<?>[] caches, boolean finalizer) {
if (caches == null) {
return 0;
}
int numFreed = 0;
for (MemoryRegionCache<?> c: caches) {
numFreed += free(c);
numFreed += free(c, finalizer);
}
return numFreed;
}
private static int free(MemoryRegionCache<?> cache) {
private static int free(MemoryRegionCache<?> cache, boolean finalizer) {
if (cache == null) {
return 0;
}
return cache.free();
return cache.free(finalizer);
}
void trim() {
@ -346,8 +346,8 @@ final class PoolThreadCache {
@Override
protected void initBuf(
PoolChunk<T> chunk, long handle, PooledByteBuf<T> buf, int reqCapacity) {
chunk.initBufWithSubpage(buf, handle, reqCapacity);
PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, PooledByteBuf<T> buf, int reqCapacity) {
chunk.initBufWithSubpage(buf, nioBuffer, handle, reqCapacity);
}
}
@ -361,8 +361,8 @@ final class PoolThreadCache {
@Override
protected void initBuf(
PoolChunk<T> chunk, long handle, PooledByteBuf<T> buf, int reqCapacity) {
chunk.initBuf(buf, handle, reqCapacity);
PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle, PooledByteBuf<T> buf, int reqCapacity) {
chunk.initBuf(buf, nioBuffer, handle, reqCapacity);
}
}
@ -381,15 +381,15 @@ final class PoolThreadCache {
/**
* Init the {@link PooledByteBuf} using the provided chunk and handle with the capacity restrictions.
*/
protected abstract void initBuf(PoolChunk<T> chunk, long handle,
protected abstract void initBuf(PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle,
PooledByteBuf<T> buf, int reqCapacity);
/**
* Add to cache if not already full.
*/
@SuppressWarnings("unchecked")
public final boolean add(PoolChunk<T> chunk, long handle) {
Entry<T> entry = newEntry(chunk, handle);
public final boolean add(PoolChunk<T> chunk, ByteBuffer nioBuffer, long handle) {
Entry<T> entry = newEntry(chunk, nioBuffer, handle);
boolean queued = queue.offer(entry);
if (!queued) {
// If it was not possible to cache the chunk, immediately recycle the entry
@ -407,7 +407,7 @@ final class PoolThreadCache {
if (entry == null) {
return false;
}
initBuf(entry.chunk, entry.handle, buf, reqCapacity);
initBuf(entry.chunk, entry.nioBuffer, entry.handle, buf, reqCapacity);
entry.recycle();
// allocations is not thread-safe, which is fine as this is only called from the same thread all the time.
@ -418,16 +418,16 @@ final class PoolThreadCache {
/**
* Clear out this cache and free up all previous cached {@link PoolChunk}s and {@code handle}s.
*/
public final int free() {
return free(Integer.MAX_VALUE);
public final int free(boolean finalizer) {
return free(Integer.MAX_VALUE, finalizer);
}
private int free(int max) {
private int free(int max, boolean finalizer) {
int numFreed = 0;
for (; numFreed < max; numFreed++) {
Entry<T> entry = queue.poll();
if (entry != null) {
freeEntry(entry);
freeEntry(entry, finalizer);
} else {
// all cleared
return numFreed;
@ -445,24 +445,29 @@ final class PoolThreadCache {
// We did not allocate the full cache capacity since the last trim, so free the unused entries.
if (free > 0) {
free(free);
free(free, false);
}
}
@SuppressWarnings({ "unchecked", "rawtypes" })
private void freeEntry(Entry entry) {
private void freeEntry(Entry entry, boolean finalizer) {
PoolChunk chunk = entry.chunk;
long handle = entry.handle;
ByteBuffer nioBuffer = entry.nioBuffer;
// recycle now so PoolChunk can be GC'ed.
entry.recycle();
if (!finalizer) {
// recycle now so PoolChunk can be GC'ed. This will only be done if this is not freed because of
// a finalizer.
entry.recycle();
}
chunk.arena.freeChunk(chunk, handle, sizeClass);
chunk.arena.freeChunk(chunk, handle, sizeClass, nioBuffer, finalizer);
}
static final class Entry<T> {
final Handle<Entry<?>> recyclerHandle;
PoolChunk<T> chunk;
ByteBuffer nioBuffer;
long handle = -1;
Entry(Handle<Entry<?>> recyclerHandle) {
@ -471,15 +476,17 @@ final class PoolThreadCache {
void recycle() {
chunk = null;
nioBuffer = null;
handle = -1;
recyclerHandle.recycle(this);
}
}
@SuppressWarnings("rawtypes")
private static Entry newEntry(PoolChunk<?> chunk, long handle) {
private static Entry newEntry(PoolChunk<?> chunk, ByteBuffer nioBuffer, long handle) {
Entry entry = RECYCLER.get();
entry.chunk = chunk;
entry.nioBuffer = nioBuffer;
entry.handle = handle;
return entry;
}
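A sketch of the finalizer flag threaded through free(...) above: when the cache is torn down by a finalizer, optional bookkeeping (recycling entries, bumping metrics) that could trigger class loading during shutdown is skipped, while an explicit close still performs the full cleanup. Class and method names below are illustrative.

public class FinalizerGuardSketch {
    private boolean freed;

    void close() {
        free(false);
    }

    @Override
    protected void finalize() throws Throwable {
        try {
            super.finalize();
        } finally {
            free(true);
        }
    }

    private void free(boolean finalizer) {
        if (freed) {
            return;
        }
        freed = true;
        if (!finalizer) {
            // Safe to do the optional bookkeeping only on the explicit path.
            System.out.println("recycled entries and updated metrics");
        }
        System.out.println("released cached memory (finalizer=" + finalizer + ")");
    }

    public static void main(String[] args) {
        new FinalizerGuardSketch().close();
    }
}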


@ -19,8 +19,13 @@ package io.netty.buffer;
import io.netty.util.Recycler;
import io.netty.util.Recycler.Handle;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
@ -33,7 +38,7 @@ abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
protected int length;
int maxLength;
PoolThreadCache cache;
private ByteBuffer tmpNioBuf;
ByteBuffer tmpNioBuf;
private ByteBufAllocator allocator;
@SuppressWarnings("unchecked")
@ -42,27 +47,29 @@ abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
this.recyclerHandle = (Handle<PooledByteBuf<T>>) recyclerHandle;
}
void init(PoolChunk<T> chunk, long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
init0(chunk, handle, offset, length, maxLength, cache);
void init(PoolChunk<T> chunk, ByteBuffer nioBuffer,
long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
init0(chunk, nioBuffer, handle, offset, length, maxLength, cache);
}
void initUnpooled(PoolChunk<T> chunk, int length) {
init0(chunk, 0, chunk.offset, length, length, null);
init0(chunk, null, 0, chunk.offset, length, length, null);
}
private void init0(PoolChunk<T> chunk, long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
private void init0(PoolChunk<T> chunk, ByteBuffer nioBuffer,
long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
assert handle >= 0;
assert chunk != null;
this.chunk = chunk;
memory = chunk.memory;
tmpNioBuf = nioBuffer;
allocator = chunk.arena.parent;
this.cache = cache;
this.handle = handle;
this.offset = offset;
this.length = length;
this.maxLength = maxLength;
tmpNioBuf = null;
}
/**
@ -70,7 +77,7 @@ abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
*/
final void reuse(int maxCapacity) {
maxCapacity(maxCapacity);
setRefCnt(1);
resetRefCnt();
setIndex0(0, 0);
discardMarks();
}
@ -81,35 +88,29 @@ abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
}
@Override
public final ByteBuf capacity(int newCapacity) {
checkNewCapacity(newCapacity);
public int maxFastWritableBytes() {
return Math.min(maxLength, maxCapacity()) - writerIndex;
}
// If the request capacity does not require reallocation, just update the length of the memory.
if (chunk.unpooled) {
if (newCapacity == length) {
return this;
}
} else {
@Override
public final ByteBuf capacity(int newCapacity) {
if (newCapacity == length) {
ensureAccessible();
return this;
}
checkNewCapacity(newCapacity);
if (!chunk.unpooled) {
// If the request capacity does not require reallocation, just update the length of the memory.
if (newCapacity > length) {
if (newCapacity <= maxLength) {
length = newCapacity;
return this;
}
} else if (newCapacity < length) {
if (newCapacity > maxLength >>> 1) {
if (maxLength <= 512) {
if (newCapacity > maxLength - 16) {
length = newCapacity;
setIndex(Math.min(readerIndex(), newCapacity), Math.min(writerIndex(), newCapacity));
return this;
}
} else { // > 512 (i.e. >= 1024)
length = newCapacity;
setIndex(Math.min(readerIndex(), newCapacity), Math.min(writerIndex(), newCapacity));
return this;
}
}
} else {
} else if (newCapacity > maxLength >>> 1 &&
(maxLength > 512 || newCapacity > maxLength - 16)) {
// here newCapacity < length
length = newCapacity;
setIndex(Math.min(readerIndex(), newCapacity), Math.min(writerIndex(), newCapacity));
return this;
}
}
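A standalone sketch of the resize decision in the rewritten capacity(int) above: a pooled buffer grows in place while the requested capacity still fits the underlying run (maxLength) and shrinks in place only while the waste stays small; everything else falls back to reallocating into a differently sized run. Pure arithmetic, no Netty types involved.

public final class ResizeDecisionSketch {
    enum Action { NOOP, GROW_IN_PLACE, SHRINK_IN_PLACE, REALLOCATE }

    static Action decide(int length, int maxLength, int newCapacity) {
        if (newCapacity == length) {
            return Action.NOOP;
        }
        if (newCapacity > length) {
            return newCapacity <= maxLength ? Action.GROW_IN_PLACE : Action.REALLOCATE;
        }
        // here newCapacity < length, mirroring the condition above
        if (newCapacity > maxLength >>> 1 && (maxLength > 512 || newCapacity > maxLength - 16)) {
            return Action.SHRINK_IN_PLACE;
        }
        return Action.REALLOCATE;
    }

    public static void main(String[] args) {
        System.out.println(decide(256, 512, 300));    // GROW_IN_PLACE   (still fits the run)
        System.out.println(decide(256, 512, 1024));   // REALLOCATE      (exceeds the run)
        System.out.println(decide(4096, 4096, 3000)); // SHRINK_IN_PLACE (> maxLength / 2)
        System.out.println(decide(4096, 4096, 100));  // REALLOCATE      (too much waste)
    }
}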
@ -166,8 +167,8 @@ abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
final long handle = this.handle;
this.handle = -1;
memory = null;
chunk.arena.free(chunk, tmpNioBuf, handle, maxLength, cache);
tmpNioBuf = null;
chunk.arena.free(chunk, handle, maxLength, cache);
chunk = null;
recycle();
}
@ -180,4 +181,81 @@ abstract class PooledByteBuf<T> extends AbstractReferenceCountedByteBuf {
protected final int idx(int index) {
return offset + index;
}
final ByteBuffer _internalNioBuffer(int index, int length, boolean duplicate) {
index = idx(index);
ByteBuffer buffer = duplicate ? newInternalNioBuffer(memory) : internalNioBuffer();
buffer.limit(index + length).position(index);
return buffer;
}
ByteBuffer duplicateInternalNioBuffer(int index, int length) {
checkIndex(index, length);
return _internalNioBuffer(index, length, true);
}
@Override
public final ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
return _internalNioBuffer(index, length, false);
}
@Override
public final int nioBufferCount() {
return 1;
}
@Override
public final ByteBuffer nioBuffer(int index, int length) {
return duplicateInternalNioBuffer(index, length).slice();
}
@Override
public final ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public final int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return out.write(duplicateInternalNioBuffer(index, length));
}
@Override
public final int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = out.write(_internalNioBuffer(readerIndex, length, false));
readerIndex += readBytes;
return readBytes;
}
@Override
public final int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return out.write(duplicateInternalNioBuffer(index, length), position);
}
@Override
public final int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = out.write(_internalNioBuffer(readerIndex, length, false), position);
readerIndex += readBytes;
return readBytes;
}
@Override
public final int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
try {
return in.read(internalNioBuffer(index, length));
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public final int setBytes(int index, FileChannel in, long position, int length) throws IOException {
try {
return in.read(internalNioBuffer(index, length), position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
}
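The consolidated _internalNioBuffer(...) helpers and the maxFastWritableBytes() override above expose how much room a pooled buffer still has inside its current run before capacity(int) must fall back to a real reallocation. The following caller-side sketch is illustrative only (the batching logic and class name are not part of this change); it assumes the default PooledByteBufAllocator instance:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class FastWritableSketch {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(150, 4096);
        byte[] payload = new byte[1024];

        // maxFastWritableBytes() = min(maxLength, maxCapacity) - writerIndex,
        // i.e. how much can be appended without leaving the current allocation.
        int fast = buf.maxFastWritableBytes();
        int first = Math.min(fast, payload.length);
        buf.writeBytes(payload, 0, first);           // grows in place, same handle

        if (first < payload.length) {
            // The remainder may trigger the slow path that copies into a larger run.
            buf.writeBytes(payload, first, payload.length - first);
        }
        buf.release();
    }
}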

View File

@ -16,12 +16,16 @@
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.NettyRuntime;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.concurrent.FastThreadLocalThread;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.ThreadExecutorMap;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
@ -29,6 +33,7 @@ import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;
public class PooledByteBufAllocator extends AbstractByteBufAllocator implements ByteBufAllocatorMetricProvider {
@ -43,12 +48,21 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
private static final int DEFAULT_NORMAL_CACHE_SIZE;
private static final int DEFAULT_MAX_CACHED_BUFFER_CAPACITY;
private static final int DEFAULT_CACHE_TRIM_INTERVAL;
private static final long DEFAULT_CACHE_TRIM_INTERVAL_MILLIS;
private static final boolean DEFAULT_USE_CACHE_FOR_ALL_THREADS;
private static final int DEFAULT_DIRECT_MEMORY_CACHE_ALIGNMENT;
static final int DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK;
private static final int MIN_PAGE_SIZE = 4096;
private static final int MAX_CHUNK_SIZE = (int) (((long) Integer.MAX_VALUE + 1) / 2);
private final Runnable trimTask = new Runnable() {
@Override
public void run() {
PooledByteBufAllocator.this.trimCurrentThreadCache();
}
};
static {
int defaultPageSize = SystemPropertyUtil.getInt("io.netty.allocator.pageSize", 8192);
Throwable pageSizeFallbackCause = null;
@ -110,12 +124,20 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
DEFAULT_CACHE_TRIM_INTERVAL = SystemPropertyUtil.getInt(
"io.netty.allocator.cacheTrimInterval", 8192);
DEFAULT_CACHE_TRIM_INTERVAL_MILLIS = SystemPropertyUtil.getLong(
"io.netty.allocation.cacheTrimIntervalMillis", 0);
DEFAULT_USE_CACHE_FOR_ALL_THREADS = SystemPropertyUtil.getBoolean(
"io.netty.allocator.useCacheForAllThreads", true);
DEFAULT_DIRECT_MEMORY_CACHE_ALIGNMENT = SystemPropertyUtil.getInt(
"io.netty.allocator.directMemoryCacheAlignment", 0);
// Use 1023 by default as we use an ArrayDeque as backing storage which will then allocate an internal array
// of 1024 elements. Otherwise we would allocate 2048 and only use 1024 which is wasteful.
DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK = SystemPropertyUtil.getInt(
"io.netty.allocator.maxCachedByteBuffersPerChunk", 1023);
if (logger.isDebugEnabled()) {
logger.debug("-Dio.netty.allocator.numHeapArenas: {}", DEFAULT_NUM_HEAP_ARENA);
logger.debug("-Dio.netty.allocator.numDirectArenas: {}", DEFAULT_NUM_DIRECT_ARENA);
@ -135,7 +157,10 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
logger.debug("-Dio.netty.allocator.normalCacheSize: {}", DEFAULT_NORMAL_CACHE_SIZE);
logger.debug("-Dio.netty.allocator.maxCachedBufferCapacity: {}", DEFAULT_MAX_CACHED_BUFFER_CAPACITY);
logger.debug("-Dio.netty.allocator.cacheTrimInterval: {}", DEFAULT_CACHE_TRIM_INTERVAL);
logger.debug("-Dio.netty.allocator.cacheTrimIntervalMillis: {}", DEFAULT_CACHE_TRIM_INTERVAL_MILLIS);
logger.debug("-Dio.netty.allocator.useCacheForAllThreads: {}", DEFAULT_USE_CACHE_FOR_ALL_THREADS);
logger.debug("-Dio.netty.allocator.maxCachedByteBuffersPerChunk: {}",
DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK);
}
}
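The defaults added above are read exactly once in this static initializer, so overrides have to be in place before PooledByteBufAllocator is first loaded. A hedged sketch of doing that from application code (normally these are supplied as -D JVM flags; note the value is read under the key io.netty.allocation.cacheTrimIntervalMillis while the debug line above prints io.netty.allocator.cacheTrimIntervalMillis):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class AllocatorTuningSketch {
    public static void main(String[] args) {
        // Must happen before PooledByteBufAllocator is first touched, because the
        // static block above reads these properties only once.
        // 511 keeps the ArrayDeque backing array at 512 entries, following the same
        // rounding argument the 1023 default comment above makes for 1024.
        System.setProperty("io.netty.allocator.maxCachedByteBuffersPerChunk", "511");
        // Key exactly as read by SystemPropertyUtil.getLong(...) above.
        System.setProperty("io.netty.allocation.cacheTrimIntervalMillis", "30000");

        // First use of the allocator now picks up the overridden defaults.
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(64);
        buf.release();
    }
}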
@ -207,17 +232,10 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
this.normalCacheSize = normalCacheSize;
chunkSize = validateAndCalculateChunkSize(pageSize, maxOrder);
if (nHeapArena < 0) {
throw new IllegalArgumentException("nHeapArena: " + nHeapArena + " (expected: >= 0)");
}
if (nDirectArena < 0) {
throw new IllegalArgumentException("nDirectArea: " + nDirectArena + " (expected: >= 0)");
}
checkPositiveOrZero(nHeapArena, "nHeapArena");
checkPositiveOrZero(nDirectArena, "nDirectArena");
if (directMemoryCacheAlignment < 0) {
throw new IllegalArgumentException("directMemoryCacheAlignment: "
+ directMemoryCacheAlignment + " (expected: >= 0)");
}
checkPositiveOrZero(directMemoryCacheAlignment, "directMemoryCacheAlignment");
if (directMemoryCacheAlignment > 0 && !isDirectMemoryCacheAlignmentSupported()) {
throw new IllegalArgumentException("directMemoryCacheAlignment is not supported");
}
@ -435,11 +453,20 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
final PoolArena<byte[]> heapArena = leastUsedArena(heapArenas);
final PoolArena<ByteBuffer> directArena = leastUsedArena(directArenas);
Thread current = Thread.currentThread();
final Thread current = Thread.currentThread();
if (useCacheForAllThreads || current instanceof FastThreadLocalThread) {
return new PoolThreadCache(
final PoolThreadCache cache = new PoolThreadCache(
heapArena, directArena, tinyCacheSize, smallCacheSize, normalCacheSize,
DEFAULT_MAX_CACHED_BUFFER_CAPACITY, DEFAULT_CACHE_TRIM_INTERVAL);
if (DEFAULT_CACHE_TRIM_INTERVAL_MILLIS > 0) {
final EventExecutor executor = ThreadExecutorMap.currentExecutor();
if (executor != null) {
executor.scheduleAtFixedRate(trimTask, DEFAULT_CACHE_TRIM_INTERVAL_MILLIS,
DEFAULT_CACHE_TRIM_INTERVAL_MILLIS, TimeUnit.MILLISECONDS);
}
}
return cache;
}
// No caching so just use 0 as sizes.
return new PoolThreadCache(heapArena, directArena, 0, 0, 0, 0, 0);
@ -447,7 +474,7 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
@Override
protected void onRemoval(PoolThreadCache threadCache) {
threadCache.free();
threadCache.free(false);
}
private <T> PoolArena<T> leastUsedArena(PoolArena<T>[] arenas) {
@ -600,6 +627,21 @@ public class PooledByteBufAllocator extends AbstractByteBufAllocator implements
return cache;
}
/**
* Trim thread local cache for the current {@link Thread}, which will give back any cached memory that was not
* allocated frequently since the last trim operation.
*
* Returns {@code true} if a cache for the current {@link Thread} exists and was trimmed, {@code false} otherwise.
*/
public boolean trimCurrentThreadCache() {
PoolThreadCache cache = threadCache.getIfExists();
if (cache != null) {
cache.trim();
return true;
}
return false;
}
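Besides the scheduled trimTask wired up above, trimCurrentThreadCache() gives applications a manual hook. A minimal sketch, assuming the default pooled allocator; the allocation loop and trim cadence are illustrative:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class ManualTrimSketch {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = PooledByteBufAllocator.DEFAULT;
        for (int i = 0; i < 10000; i++) {
            ByteBuf buf = alloc.directBuffer(256);
            buf.writeZero(256);
            buf.release(); // released buffers land in this thread's PoolThreadCache
            if (i % 1000 == 0) {
                // Hands cached-but-idle memory back to the arenas. Returns false only
                // if the calling thread has no cache (e.g. caching is disabled for it).
                alloc.trimCurrentThreadCache();
            }
        }
    }
}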
/**
* Returns the status of the allocator (which contains all metrics) as string. Be aware this may be expensive
* and so should not be called too frequently.

View File

@ -22,10 +22,6 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
final class PooledDirectByteBuf extends PooledByteBuf<ByteBuffer> {
@ -126,55 +122,30 @@ final class PooledDirectByteBuf extends PooledByteBuf<ByteBuffer> {
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
getBytes(index, dst, dstIndex, length, false);
return this;
}
private void getBytes(int index, byte[] dst, int dstIndex, int length, boolean internal) {
checkDstIndex(index, length, dstIndex, dst.length);
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = memory.duplicate();
}
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.get(dst, dstIndex, length);
_internalNioBuffer(index, length, true).get(dst, dstIndex, length);
return this;
}
@Override
public ByteBuf readBytes(byte[] dst, int dstIndex, int length) {
checkReadableBytes(length);
getBytes(readerIndex, dst, dstIndex, length, true);
checkDstIndex(length, dstIndex, dst.length);
_internalNioBuffer(readerIndex, length, false).get(dst, dstIndex, length);
readerIndex += length;
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
getBytes(index, dst, false);
dst.put(duplicateInternalNioBuffer(index, dst.remaining()));
return this;
}
private void getBytes(int index, ByteBuffer dst, boolean internal) {
checkIndex(index, dst.remaining());
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = memory.duplicate();
}
index = idx(index);
tmpBuf.clear().position(index).limit(index + dst.remaining());
dst.put(tmpBuf);
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
int length = dst.remaining();
checkReadableBytes(length);
getBytes(readerIndex, dst, true);
dst.put(_internalNioBuffer(readerIndex, length, false));
readerIndex += length;
return this;
}
@ -201,61 +172,6 @@ final class PooledDirectByteBuf extends PooledByteBuf<ByteBuffer> {
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return getBytes(index, out, length, false);
}
private int getBytes(int index, GatheringByteChannel out, int length, boolean internal) throws IOException {
checkIndex(index, length);
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = memory.duplicate();
}
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return getBytes(index, out, position, length, false);
}
private int getBytes(int index, FileChannel out, long position, int length, boolean internal) throws IOException {
checkIndex(index, length);
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf = internal ? internalNioBuffer() : memory.duplicate();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf, position);
}
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, position, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
protected void _setByte(int index, int value) {
memory.put(idx(index), (byte) value);
@ -327,23 +243,21 @@ final class PooledDirectByteBuf extends PooledByteBuf<ByteBuffer> {
@Override
public ByteBuf setBytes(int index, byte[] src, int srcIndex, int length) {
checkSrcIndex(index, length, srcIndex, src.length);
ByteBuffer tmpBuf = internalNioBuffer();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.put(src, srcIndex, length);
_internalNioBuffer(index, length, false).put(src, srcIndex, length);
return this;
}
@Override
public ByteBuf setBytes(int index, ByteBuffer src) {
checkIndex(index, src.remaining());
int length = src.remaining();
checkIndex(index, length);
ByteBuffer tmpBuf = internalNioBuffer();
if (src == tmpBuf) {
src = src.duplicate();
}
index = idx(index);
tmpBuf.clear().position(index).limit(index + src.remaining());
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.put(src);
return this;
}
@ -362,62 +276,11 @@ final class PooledDirectByteBuf extends PooledByteBuf<ByteBuffer> {
return readBytes;
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
checkIndex(index, length);
ByteBuffer tmpBuf = internalNioBuffer();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
checkIndex(index, length);
ByteBuffer tmpBuf = internalNioBuffer();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf, position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex(index, length);
ByteBuf copy = alloc().directBuffer(length, maxCapacity());
copy.writeBytes(this, index, length);
return copy;
}
@Override
public int nioBufferCount() {
return 1;
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
index = idx(index);
return ((ByteBuffer) memory.duplicate().position(index).limit(index + length)).slice();
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
index = idx(index);
return (ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length);
return copy.writeBytes(this, index, length);
}
@Override

View File

@ -21,10 +21,6 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
class PooledHeapByteBuf extends PooledByteBuf<byte[]> {
@ -117,8 +113,9 @@ class PooledHeapByteBuf extends PooledByteBuf<byte[]> {
@Override
public final ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex(index, dst.remaining());
dst.put(memory, idx(index), dst.remaining());
int length = dst.remaining();
checkIndex(index, length);
dst.put(memory, idx(index), length);
return this;
}
@ -129,51 +126,6 @@ class PooledHeapByteBuf extends PooledByteBuf<byte[]> {
return this;
}
@Override
public final int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return getBytes(index, out, length, false);
}
private int getBytes(int index, GatheringByteChannel out, int length, boolean internal) throws IOException {
checkIndex(index, length);
index = idx(index);
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = ByteBuffer.wrap(memory);
}
return out.write((ByteBuffer) tmpBuf.clear().position(index).limit(index + length));
}
@Override
public final int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return getBytes(index, out, position, length, false);
}
private int getBytes(int index, FileChannel out, long position, int length, boolean internal) throws IOException {
checkIndex(index, length);
index = idx(index);
ByteBuffer tmpBuf = internal ? internalNioBuffer() : ByteBuffer.wrap(memory);
return out.write((ByteBuffer) tmpBuf.clear().position(index).limit(index + length), position);
}
@Override
public final int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public final int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, position, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
protected void _setByte(int index, int value) {
HeapByteBufUtil.setByte(memory, idx(index), value);
@ -253,59 +205,17 @@ class PooledHeapByteBuf extends PooledByteBuf<byte[]> {
return in.read(memory, idx(index), length);
}
@Override
public final int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
checkIndex(index, length);
index = idx(index);
try {
return in.read((ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length));
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public final int setBytes(int index, FileChannel in, long position, int length) throws IOException {
checkIndex(index, length);
index = idx(index);
try {
return in.read((ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length), position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public final ByteBuf copy(int index, int length) {
checkIndex(index, length);
ByteBuf copy = alloc().heapBuffer(length, maxCapacity());
copy.writeBytes(memory, idx(index), length);
return copy;
return copy.writeBytes(memory, idx(index), length);
}
@Override
public final int nioBufferCount() {
return 1;
}
@Override
public final ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public final ByteBuffer nioBuffer(int index, int length) {
final ByteBuffer duplicateInternalNioBuffer(int index, int length) {
checkIndex(index, length);
index = idx(index);
ByteBuffer buf = ByteBuffer.wrap(memory, index, length);
return buf.slice();
}
@Override
public final ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
index = idx(index);
return (ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length);
return ByteBuffer.wrap(memory, idx(index), length).slice();
}
@Override

View File

@ -23,10 +23,6 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
final class PooledUnsafeDirectByteBuf extends PooledByteBuf<ByteBuffer> {
private static final Recycler<PooledUnsafeDirectByteBuf> RECYCLER = new Recycler<PooledUnsafeDirectByteBuf>() {
@ -49,9 +45,9 @@ final class PooledUnsafeDirectByteBuf extends PooledByteBuf<ByteBuffer> {
}
@Override
void init(PoolChunk<ByteBuffer> chunk, long handle, int offset, int length, int maxLength,
PoolThreadCache cache) {
super.init(chunk, handle, offset, length, maxLength, cache);
void init(PoolChunk<ByteBuffer> chunk, ByteBuffer nioBuffer,
long handle, int offset, int length, int maxLength, PoolThreadCache cache) {
super.init(chunk, nioBuffer, handle, offset, length, maxLength, cache);
initMemoryAddress();
}
@ -138,78 +134,12 @@ final class PooledUnsafeDirectByteBuf extends PooledByteBuf<ByteBuffer> {
return this;
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
int length = dst.remaining();
checkReadableBytes(length);
getBytes(readerIndex, dst);
readerIndex += length;
return this;
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
UnsafeByteBufUtil.getBytes(this, addr(index), index, out, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return getBytes(index, out, length, false);
}
private int getBytes(int index, GatheringByteChannel out, int length, boolean internal) throws IOException {
checkIndex(index, length);
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = memory.duplicate();
}
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return getBytes(index, out, position, length, false);
}
private int getBytes(int index, FileChannel out, long position, int length, boolean internal) throws IOException {
checkIndex(index, length);
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf = internal ? internalNioBuffer() : memory.duplicate();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf, position);
}
@Override
public int readBytes(GatheringByteChannel out, int length)
throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public int readBytes(FileChannel out, long position, int length)
throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, position, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
protected void _setByte(int index, int value) {
UnsafeByteBufUtil.setByte(addr(index), (byte) value);
@ -278,61 +208,11 @@ final class PooledUnsafeDirectByteBuf extends PooledByteBuf<ByteBuffer> {
return UnsafeByteBufUtil.setBytes(this, addr(index), index, in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
checkIndex(index, length);
ByteBuffer tmpBuf = internalNioBuffer();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
checkIndex(index, length);
ByteBuffer tmpBuf = internalNioBuffer();
index = idx(index);
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf, position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public ByteBuf copy(int index, int length) {
return UnsafeByteBufUtil.copy(this, addr(index), index, length);
}
@Override
public int nioBufferCount() {
return 1;
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
index = idx(index);
return ((ByteBuffer) memory.duplicate().position(index).limit(index + length)).slice();
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
index = idx(index);
return (ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length);
}
@Override
public boolean hasArray() {
return false;

View File

@ -195,11 +195,6 @@ class ReadOnlyByteBufferBuf extends AbstractReferenceCountedByteBuf {
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
checkDstIndex(index, length, dstIndex, dst.length);
if (dstIndex < 0 || dstIndex > dst.length - length) {
throw new IndexOutOfBoundsException(String.format(
"dstIndex: %d, length: %d (expected: range(0, %d))", dstIndex, length, dst.length));
}
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
tmpBuf.get(dst, dstIndex, length);
@ -208,14 +203,10 @@ class ReadOnlyByteBufferBuf extends AbstractReferenceCountedByteBuf {
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex(index);
if (dst == null) {
throw new NullPointerException("dst");
}
checkIndex(index, dst.remaining());
int bytesToCopy = Math.min(capacity() - index, dst.remaining());
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + bytesToCopy);
tmpBuf.clear().position(index).limit(index + dst.remaining());
dst.put(tmpBuf);
return this;
}
@ -453,6 +444,7 @@ class ReadOnlyByteBufferBuf extends AbstractReferenceCountedByteBuf {
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
return (ByteBuffer) buffer.duplicate().position(index).limit(index + length);
}

View File

@ -96,20 +96,6 @@ final class ReadOnlyUnsafeDirectByteBuf extends ReadOnlyByteBufferBuf {
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex(index);
if (dst == null) {
throw new NullPointerException("dst");
}
int bytesToCopy = Math.min(capacity() - index, dst.remaining());
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + bytesToCopy);
dst.put(tmpBuf);
return this;
}
@Override
public ByteBuf copy(int index, int length) {
checkIndex(index, length);

View File

@ -151,6 +151,11 @@ public class SwappedByteBuf extends ByteBuf {
return buf.maxWritableBytes();
}
@Override
public int maxFastWritableBytes() {
return buf.maxFastWritableBytes();
}
@Override
public boolean isReadable() {
return buf.isReadable();
@ -997,6 +1002,11 @@ public class SwappedByteBuf extends ByteBuf {
return buf.refCnt();
}
@Override
final boolean isAccessible() {
return buf.isAccessible();
}
@Override
public ByteBuf retain() {
buf.retain();

View File

@ -219,7 +219,7 @@ public final class Unpooled {
* Creates a new buffer which wraps the specified buffer's readable bytes.
* A modification on the specified buffer's content will be visible to the
* returned buffer.
* @param buffer The buffer to wrap. Reference count ownership of this variable is transfered to this method.
* @param buffer The buffer to wrap. Reference count ownership of this variable is transferred to this method.
* @return The readable portion of the {@code buffer}, or an empty buffer if there is no readable portion.
* The caller is responsible for releasing this buffer.
*/
@ -245,7 +245,7 @@ public final class Unpooled {
* Creates a new big-endian composite buffer which wraps the readable bytes of the
* specified buffers without copying them. A modification on the content
* of the specified buffers will be visible to the returned buffer.
* @param buffers The buffers to wrap. Reference count ownership of all variables is transfered to this method.
* @param buffers The buffers to wrap. Reference count ownership of all variables is transferred to this method.
* @return The readable portion of the {@code buffers}. The caller is responsible for releasing this buffer.
*/
public static ByteBuf wrappedBuffer(ByteBuf... buffers) {
@ -300,7 +300,7 @@ public final class Unpooled {
* of the specified buffers will be visible to the returned buffer.
* @param maxNumComponents Advisement as to how many independent buffers are allowed to exist before
* consolidation occurs.
* @param buffers The buffers to wrap. Reference count ownership of all variables is transfered to this method.
* @param buffers The buffers to wrap. Reference count ownership of all variables is transferred to this method.
* @return The readable portion of the {@code buffers}. The caller is responsible for releasing this buffer.
*/
public static ByteBuf wrappedBuffer(int maxNumComponents, ByteBuf... buffers) {
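The corrected javadoc above ("transferred") is worth spelling out: wrappedBuffer(...) takes over the reference counts of the buffers it is given, and only the returned buffer needs to be released. A hedged sketch:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class WrappedOwnershipSketch {
    public static void main(String[] args) {
        ByteBuf a = Unpooled.buffer(8).writeInt(1);
        ByteBuf b = Unpooled.buffer(8).writeInt(2);

        // Ownership of a and b is transferred here; do not release them directly.
        ByteBuf wrapped = Unpooled.wrappedBuffer(a, b);

        // Releasing the wrapper releases the wrapped components as well.
        wrapped.release();
        System.out.println(a.refCnt() + " " + b.refCnt()); // 0 0
    }
}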

View File

@ -15,6 +15,8 @@
*/
package io.netty.buffer;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.util.internal.PlatformDependent;
import java.io.IOException;
@ -36,7 +38,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
private final ByteBufAllocator alloc;
private ByteBuffer buffer;
ByteBuffer buffer; // accessed by UnpooledUnsafeNoCleanerDirectByteBuf.reallocateDirect()
private ByteBuffer tmpNioBuf;
private int capacity;
private boolean doNotFree;
@ -52,19 +54,15 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
if (alloc == null) {
throw new NullPointerException("alloc");
}
if (initialCapacity < 0) {
throw new IllegalArgumentException("initialCapacity: " + initialCapacity);
}
if (maxCapacity < 0) {
throw new IllegalArgumentException("maxCapacity: " + maxCapacity);
}
checkPositiveOrZero(initialCapacity, "initialCapacity");
checkPositiveOrZero(maxCapacity, "maxCapacity");
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
}
this.alloc = alloc;
setByteBuffer(allocateDirect(initialCapacity));
setByteBuffer(allocateDirect(initialCapacity), false);
}
/**
@ -73,6 +71,11 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
* @param maxCapacity the maximum capacity of the underlying direct buffer
*/
protected UnpooledDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer, int maxCapacity) {
this(alloc, initialBuffer, maxCapacity, false, true);
}
UnpooledDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer,
int maxCapacity, boolean doFree, boolean slice) {
super(maxCapacity);
if (alloc == null) {
throw new NullPointerException("alloc");
@ -94,8 +97,8 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
}
this.alloc = alloc;
doNotFree = true;
setByteBuffer(initialBuffer.slice().order(ByteOrder.BIG_ENDIAN));
doNotFree = !doFree;
setByteBuffer((slice ? initialBuffer.slice() : initialBuffer).order(ByteOrder.BIG_ENDIAN), false);
writerIndex(initialCapacity);
}
@ -113,13 +116,15 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
PlatformDependent.freeDirectBuffer(buffer);
}
private void setByteBuffer(ByteBuffer buffer) {
ByteBuffer oldBuffer = this.buffer;
if (oldBuffer != null) {
if (doNotFree) {
doNotFree = false;
} else {
freeDirect(oldBuffer);
void setByteBuffer(ByteBuffer buffer, boolean tryFree) {
if (tryFree) {
ByteBuffer oldBuffer = this.buffer;
if (oldBuffer != null) {
if (doNotFree) {
doNotFree = false;
} else {
freeDirect(oldBuffer);
}
}
}
@ -153,7 +158,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
newBuffer.position(0).limit(oldBuffer.capacity());
newBuffer.put(oldBuffer);
newBuffer.clear();
setByteBuffer(newBuffer);
setByteBuffer(newBuffer, true);
} else if (newCapacity < oldCapacity) {
ByteBuffer oldBuffer = buffer;
ByteBuffer newBuffer = allocateDirect(newCapacity);
@ -168,7 +173,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
} else {
setIndex(newCapacity, newCapacity);
}
setByteBuffer(newBuffer);
setByteBuffer(newBuffer, true);
}
return this;
}
@ -310,7 +315,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
return this;
}
private void getBytes(int index, byte[] dst, int dstIndex, int length, boolean internal) {
void getBytes(int index, byte[] dst, int dstIndex, int length, boolean internal) {
checkDstIndex(index, length, dstIndex, dst.length);
ByteBuffer tmpBuf;
@ -337,7 +342,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
return this;
}
private void getBytes(int index, ByteBuffer dst, boolean internal) {
void getBytes(int index, ByteBuffer dst, boolean internal) {
checkIndex(index, dst.remaining());
ByteBuffer tmpBuf;
@ -486,7 +491,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
return this;
}
private void getBytes(int index, OutputStream out, int length, boolean internal) throws IOException {
void getBytes(int index, OutputStream out, int length, boolean internal) throws IOException {
ensureAccessible();
if (length == 0) {
return;
@ -579,7 +584,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpNioBuf);
return in.read(tmpBuf);
} catch (ClosedChannelException ignored) {
return -1;
}
@ -591,7 +596,7 @@ public class UnpooledDirectByteBuf extends AbstractReferenceCountedByteBuf {
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpNioBuf, position);
return in.read(tmpBuf, position);
} catch (ClosedChannelException ignored) {
return -1;
}

View File

@ -194,7 +194,7 @@ public class UnpooledHeapByteBuf extends AbstractReferenceCountedByteBuf {
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
checkIndex(index, dst.remaining());
ensureAccessible();
dst.put(array, index, dst.remaining());
return this;
}

View File

@ -21,25 +21,14 @@ import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.channels.GatheringByteChannel;
import java.nio.channels.ScatteringByteChannel;
/**
* A NIO {@link ByteBuffer} based buffer. It is recommended to use
* {@link UnpooledByteBufAllocator#directBuffer(int, int)}, {@link Unpooled#directBuffer(int)} and
* {@link Unpooled#wrappedBuffer(ByteBuffer)} instead of calling the constructor explicitly.
*/
public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf {
public class UnpooledUnsafeDirectByteBuf extends UnpooledDirectByteBuf {
private final ByteBufAllocator alloc;
private ByteBuffer tmpNioBuf;
private int capacity;
private boolean doNotFree;
ByteBuffer buffer;
long memoryAddress;
/**
@ -49,23 +38,7 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
* @param maxCapacity the maximum capacity of the underlying direct buffer
*/
public UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(maxCapacity);
if (alloc == null) {
throw new NullPointerException("alloc");
}
if (initialCapacity < 0) {
throw new IllegalArgumentException("initialCapacity: " + initialCapacity);
}
if (maxCapacity < 0) {
throw new IllegalArgumentException("maxCapacity: " + maxCapacity);
}
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
}
this.alloc = alloc;
setByteBuffer(allocateDirect(initialCapacity), false);
super(alloc, initialCapacity, maxCapacity);
}
/**
@ -83,135 +56,17 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
// sun/misc/Unsafe.java#l1250
//
// We also call slice() explicitly here to preserve behaviour with previous netty releases.
this(alloc, initialBuffer.slice(), maxCapacity, false);
super(alloc, initialBuffer, maxCapacity, /* doFree = */ false, /* slice = */ true);
}
UnpooledUnsafeDirectByteBuf(ByteBufAllocator alloc, ByteBuffer initialBuffer, int maxCapacity, boolean doFree) {
super(maxCapacity);
if (alloc == null) {
throw new NullPointerException("alloc");
}
if (initialBuffer == null) {
throw new NullPointerException("initialBuffer");
}
if (!initialBuffer.isDirect()) {
throw new IllegalArgumentException("initialBuffer is not a direct buffer.");
}
if (initialBuffer.isReadOnly()) {
throw new IllegalArgumentException("initialBuffer is a read-only buffer.");
}
int initialCapacity = initialBuffer.remaining();
if (initialCapacity > maxCapacity) {
throw new IllegalArgumentException(String.format(
"initialCapacity(%d) > maxCapacity(%d)", initialCapacity, maxCapacity));
}
this.alloc = alloc;
doNotFree = !doFree;
setByteBuffer(initialBuffer.order(ByteOrder.BIG_ENDIAN), false);
writerIndex(initialCapacity);
}
/**
* Allocate a new direct {@link ByteBuffer} with the given initialCapacity.
*/
protected ByteBuffer allocateDirect(int initialCapacity) {
return ByteBuffer.allocateDirect(initialCapacity);
}
/**
* Free a direct {@link ByteBuffer}
*/
protected void freeDirect(ByteBuffer buffer) {
PlatformDependent.freeDirectBuffer(buffer);
super(alloc, initialBuffer, maxCapacity, doFree, false);
}
@Override
final void setByteBuffer(ByteBuffer buffer, boolean tryFree) {
if (tryFree) {
ByteBuffer oldBuffer = this.buffer;
if (oldBuffer != null) {
if (doNotFree) {
doNotFree = false;
} else {
freeDirect(oldBuffer);
}
}
}
this.buffer = buffer;
super.setByteBuffer(buffer, tryFree);
memoryAddress = PlatformDependent.directBufferAddress(buffer);
tmpNioBuf = null;
capacity = buffer.remaining();
}
@Override
public boolean isDirect() {
return true;
}
@Override
public int capacity() {
return capacity;
}
@Override
public ByteBuf capacity(int newCapacity) {
checkNewCapacity(newCapacity);
int readerIndex = readerIndex();
int writerIndex = writerIndex();
int oldCapacity = capacity;
if (newCapacity > oldCapacity) {
ByteBuffer oldBuffer = buffer;
ByteBuffer newBuffer = allocateDirect(newCapacity);
oldBuffer.position(0).limit(oldBuffer.capacity());
newBuffer.position(0).limit(oldBuffer.capacity());
newBuffer.put(oldBuffer);
newBuffer.clear();
setByteBuffer(newBuffer, true);
} else if (newCapacity < oldCapacity) {
ByteBuffer oldBuffer = buffer;
ByteBuffer newBuffer = allocateDirect(newCapacity);
if (readerIndex < newCapacity) {
if (writerIndex > newCapacity) {
writerIndex(writerIndex = newCapacity);
}
oldBuffer.position(readerIndex).limit(writerIndex);
newBuffer.position(readerIndex).limit(writerIndex);
newBuffer.put(oldBuffer);
newBuffer.clear();
} else {
setIndex(newCapacity, newCapacity);
}
setByteBuffer(newBuffer, true);
}
return this;
}
@Override
public ByteBufAllocator alloc() {
return alloc;
}
@Override
public ByteOrder order() {
return ByteOrder.BIG_ENDIAN;
}
@Override
public boolean hasArray() {
return false;
}
@Override
public byte[] array() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
public int arrayOffset() {
throw new UnsupportedOperationException("direct buffer");
}
@Override
@ -225,11 +80,23 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
return memoryAddress;
}
@Override
public byte getByte(int index) {
checkIndex(index);
return _getByte(index);
}
@Override
protected byte _getByte(int index) {
return UnsafeByteBufUtil.getByte(addr(index));
}
@Override
public short getShort(int index) {
checkIndex(index, 2);
return _getShort(index);
}
@Override
protected short _getShort(int index) {
return UnsafeByteBufUtil.getShort(addr(index));
@ -240,6 +107,12 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
return UnsafeByteBufUtil.getShortLE(addr(index));
}
@Override
public int getUnsignedMedium(int index) {
checkIndex(index, 3);
return _getUnsignedMedium(index);
}
@Override
protected int _getUnsignedMedium(int index) {
return UnsafeByteBufUtil.getUnsignedMedium(addr(index));
@ -250,6 +123,12 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
return UnsafeByteBufUtil.getUnsignedMediumLE(addr(index));
}
@Override
public int getInt(int index) {
checkIndex(index, 4);
return _getInt(index);
}
@Override
protected int _getInt(int index) {
return UnsafeByteBufUtil.getInt(addr(index));
@ -260,6 +139,12 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
return UnsafeByteBufUtil.getIntLE(addr(index));
}
@Override
public long getLong(int index) {
checkIndex(index, 8);
return _getLong(index);
}
@Override
protected long _getLong(int index) {
return UnsafeByteBufUtil.getLong(addr(index));
@ -277,23 +162,19 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
}
@Override
public ByteBuf getBytes(int index, byte[] dst, int dstIndex, int length) {
void getBytes(int index, byte[] dst, int dstIndex, int length, boolean internal) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst, dstIndex, length);
return this;
}
@Override
public ByteBuf getBytes(int index, ByteBuffer dst) {
void getBytes(int index, ByteBuffer dst, boolean internal) {
UnsafeByteBufUtil.getBytes(this, addr(index), index, dst);
return this;
}
@Override
public ByteBuf readBytes(ByteBuffer dst) {
int length = dst.remaining();
checkReadableBytes(length);
getBytes(readerIndex, dst);
readerIndex += length;
public ByteBuf setByte(int index, int value) {
checkIndex(index);
_setByte(index, value);
return this;
}
@ -302,6 +183,13 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
UnsafeByteBufUtil.setByte(addr(index), value);
}
@Override
public ByteBuf setShort(int index, int value) {
checkIndex(index, 2);
_setShort(index, value);
return this;
}
@Override
protected void _setShort(int index, int value) {
UnsafeByteBufUtil.setShort(addr(index), value);
@ -312,6 +200,13 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
UnsafeByteBufUtil.setShortLE(addr(index), value);
}
@Override
public ByteBuf setMedium(int index, int value) {
checkIndex(index, 3);
_setMedium(index, value);
return this;
}
@Override
protected void _setMedium(int index, int value) {
UnsafeByteBufUtil.setMedium(addr(index), value);
@ -322,6 +217,13 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
UnsafeByteBufUtil.setMediumLE(addr(index), value);
}
@Override
public ByteBuf setInt(int index, int value) {
checkIndex(index, 4);
_setInt(index, value);
return this;
}
@Override
protected void _setInt(int index, int value) {
UnsafeByteBufUtil.setInt(addr(index), value);
@ -332,6 +234,13 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
UnsafeByteBufUtil.setIntLE(addr(index), value);
}
@Override
public ByteBuf setLong(int index, long value) {
checkIndex(index, 8);
_setLong(index, value);
return this;
}
@Override
protected void _setLong(int index, long value) {
UnsafeByteBufUtil.setLong(addr(index), value);
@ -361,62 +270,8 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
}
@Override
public ByteBuf getBytes(int index, OutputStream out, int length) throws IOException {
void getBytes(int index, OutputStream out, int length, boolean internal) throws IOException {
UnsafeByteBufUtil.getBytes(this, addr(index), index, out, length);
return this;
}
@Override
public int getBytes(int index, GatheringByteChannel out, int length) throws IOException {
return getBytes(index, out, length, false);
}
private int getBytes(int index, GatheringByteChannel out, int length, boolean internal) throws IOException {
ensureAccessible();
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf;
if (internal) {
tmpBuf = internalNioBuffer();
} else {
tmpBuf = buffer.duplicate();
}
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf);
}
@Override
public int getBytes(int index, FileChannel out, long position, int length) throws IOException {
return getBytes(index, out, position, length, false);
}
private int getBytes(int index, FileChannel out, long position, int length, boolean internal) throws IOException {
ensureAccessible();
if (length == 0) {
return 0;
}
ByteBuffer tmpBuf = internal ? internalNioBuffer() : buffer.duplicate();
tmpBuf.clear().position(index).limit(index + length);
return out.write(tmpBuf, position);
}
@Override
public int readBytes(GatheringByteChannel out, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
public int readBytes(FileChannel out, long position, int length) throws IOException {
checkReadableBytes(length);
int readBytes = getBytes(readerIndex, out, position, length, true);
readerIndex += readBytes;
return readBytes;
}
@Override
@ -424,85 +279,12 @@ public class UnpooledUnsafeDirectByteBuf extends AbstractReferenceCountedByteBuf
return UnsafeByteBufUtil.setBytes(this, addr(index), index, in, length);
}
@Override
public int setBytes(int index, ScatteringByteChannel in, int length) throws IOException {
ensureAccessible();
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int setBytes(int index, FileChannel in, long position, int length) throws IOException {
ensureAccessible();
ByteBuffer tmpBuf = internalNioBuffer();
tmpBuf.clear().position(index).limit(index + length);
try {
return in.read(tmpBuf, position);
} catch (ClosedChannelException ignored) {
return -1;
}
}
@Override
public int nioBufferCount() {
return 1;
}
@Override
public ByteBuffer[] nioBuffers(int index, int length) {
return new ByteBuffer[] { nioBuffer(index, length) };
}
@Override
public ByteBuf copy(int index, int length) {
return UnsafeByteBufUtil.copy(this, addr(index), index, length);
}
@Override
public ByteBuffer internalNioBuffer(int index, int length) {
checkIndex(index, length);
return (ByteBuffer) internalNioBuffer().clear().position(index).limit(index + length);
}
private ByteBuffer internalNioBuffer() {
ByteBuffer tmpNioBuf = this.tmpNioBuf;
if (tmpNioBuf == null) {
this.tmpNioBuf = tmpNioBuf = buffer.duplicate();
}
return tmpNioBuf;
}
@Override
public ByteBuffer nioBuffer(int index, int length) {
checkIndex(index, length);
return ((ByteBuffer) buffer.duplicate().position(index).limit(index + length)).slice();
}
@Override
protected void deallocate() {
ByteBuffer buffer = this.buffer;
if (buffer == null) {
return;
}
this.buffer = null;
if (!doNotFree) {
freeDirect(buffer);
}
}
@Override
public ByteBuf unwrap() {
return null;
}
long addr(int index) {
final long addr(int index) {
return memoryAddress + index;
}

View File

@ -17,7 +17,12 @@ package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
class UnpooledUnsafeHeapByteBuf extends UnpooledHeapByteBuf {
/**
* Big endian Java heap buffer implementation. It is recommended to use
* {@link UnpooledByteBufAllocator#heapBuffer(int, int)}, {@link Unpooled#buffer(int)} and
* {@link Unpooled#wrappedBuffer(byte[])} instead of calling the constructor explicitly.
*/
public class UnpooledUnsafeHeapByteBuf extends UnpooledHeapByteBuf {
/**
* Creates a new heap buffer with a newly allocated byte array.
@ -25,7 +30,7 @@ class UnpooledUnsafeHeapByteBuf extends UnpooledHeapByteBuf {
* @param initialCapacity the initial capacity of the underlying byte array
* @param maxCapacity the max capacity of the underlying byte array
*/
UnpooledUnsafeHeapByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
public UnpooledUnsafeHeapByteBuf(ByteBufAllocator alloc, int initialCapacity, int maxCapacity) {
super(alloc, initialCapacity, maxCapacity);
}
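Both of the javadocs above steer users towards the allocator entry points instead of the now-public constructors; whether an unsafe variant is actually handed out depends on the platform (sun.misc.Unsafe availability). A hedged sketch of the recommended usage:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.UnpooledByteBufAllocator;

public final class PreferAllocatorSketch {
    public static void main(String[] args) {
        // Preferred over new UnpooledUnsafeHeapByteBuf(...): the allocator picks the
        // unsafe implementation only when Unsafe is usable on this JVM.
        ByteBuf heap = UnpooledByteBufAllocator.DEFAULT.heapBuffer(64, 1024);
        ByteBuf direct = UnpooledByteBufAllocator.DEFAULT.directBuffer(64, 1024);

        heap.writeLong(42L);
        direct.writeLong(42L);

        heap.release();
        direct.release();
    }
}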

View File

@ -151,6 +151,11 @@ class WrappedByteBuf extends ByteBuf {
return buf.maxWritableBytes();
}
@Override
public int maxFastWritableBytes() {
return buf.maxFastWritableBytes();
}
@Override
public final boolean isReadable() {
return buf.isReadable();
@ -1033,4 +1038,9 @@ class WrappedByteBuf extends ByteBuf {
public boolean release(int decrement) {
return buf.release(decrement);
}
@Override
final boolean isAccessible() {
return buf.isAccessible();
}
}

View File

@ -98,6 +98,11 @@ class WrappedCompositeByteBuf extends CompositeByteBuf {
return wrapped.maxWritableBytes();
}
@Override
public int maxFastWritableBytes() {
return wrapped.maxFastWritableBytes();
}
@Override
public int ensureWritable(int minWritableBytes, boolean force) {
return wrapped.ensureWritable(minWritableBytes, force);
@ -424,8 +429,8 @@ class WrappedCompositeByteBuf extends CompositeByteBuf {
}
@Override
int internalRefCnt() {
return wrapped.internalRefCnt();
final boolean isAccessible() {
return wrapped.isAccessible();
}
@Override
@ -548,6 +553,12 @@ class WrappedCompositeByteBuf extends CompositeByteBuf {
return this;
}
@Override
public CompositeByteBuf addFlattenedComponents(boolean increaseWriterIndex, ByteBuf buffer) {
wrapped.addFlattenedComponents(increaseWriterIndex, buffer);
return this;
}
@Override
public CompositeByteBuf removeComponent(int cIndex) {
wrapped.removeComponent(cIndex);

View File

@ -4878,4 +4878,16 @@ public abstract class AbstractByteBufTest {
buffer.release();
}
}
@Test
public void testMaxFastWritableBytes() {
ByteBuf buffer = newBuffer(150, 500).writerIndex(100);
assertEquals(50, buffer.writableBytes());
assertEquals(150, buffer.capacity());
assertEquals(500, buffer.maxCapacity());
assertEquals(400, buffer.maxWritableBytes());
// Default implementation has fast writable == writable
assertEquals(50, buffer.maxFastWritableBytes());
buffer.release();
}
}

View File

@ -52,6 +52,8 @@ import static org.junit.Assert.fail;
*/
public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
private static final ByteBufAllocator ALLOC = UnpooledByteBufAllocator.DEFAULT;
private final ByteOrder order;
protected AbstractCompositeByteBufTest(ByteOrder order) {
@ -133,6 +135,41 @@ public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
buf.release();
}
@Test
public void testToComponentIndex() {
CompositeByteBuf buf = (CompositeByteBuf) wrappedBuffer(new byte[]{1, 2, 3, 4, 5},
new byte[]{4, 5, 6, 7, 8, 9, 26}, new byte[]{10, 9, 8, 7, 6, 5, 33});
// spot checks
assertEquals(0, buf.toComponentIndex(4));
assertEquals(1, buf.toComponentIndex(5));
assertEquals(2, buf.toComponentIndex(15));
//Loop through each byte
byte index = 0;
while (index < buf.capacity()) {
int cindex = buf.toComponentIndex(index++);
assertTrue(cindex >= 0 && cindex < buf.numComponents());
}
buf.release();
}
@Test
public void testToByteIndex() {
CompositeByteBuf buf = (CompositeByteBuf) wrappedBuffer(new byte[]{1, 2, 3, 4, 5},
new byte[]{4, 5, 6, 7, 8, 9, 26}, new byte[]{10, 9, 8, 7, 6, 5, 33});
// spot checks
assertEquals(0, buf.toByteIndex(0));
assertEquals(5, buf.toByteIndex(1));
assertEquals(12, buf.toByteIndex(2));
buf.release();
}
@Test
public void testDiscardReadBytes3() {
ByteBuf a, b;
@ -745,6 +782,20 @@ public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
buf.release();
}
@Test
public void testRemoveComponents() {
CompositeByteBuf buf = compositeBuffer();
for (int i = 0; i < 10; i++) {
buf.addComponent(wrappedBuffer(new byte[]{1, 2}));
}
assertEquals(10, buf.numComponents());
assertEquals(20, buf.capacity());
buf.removeComponents(4, 3);
assertEquals(7, buf.numComponents());
assertEquals(14, buf.capacity());
buf.release();
}
@Test
public void testGatheringWritesHeap() throws Exception {
testGatheringWrites(buffer().order(order), buffer().order(order));
@ -926,7 +977,27 @@ public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
@Override
@Test
public void testInternalNioBuffer() {
// ignore
CompositeByteBuf buf = compositeBuffer();
assertEquals(0, buf.internalNioBuffer(0, 0).remaining());
// If non-derived buffer is added, its internal buffer should be returned
ByteBuf concreteBuffer = directBuffer().writeByte(1);
buf.addComponent(concreteBuffer);
assertSame(concreteBuffer.internalNioBuffer(0, 1), buf.internalNioBuffer(0, 1));
buf.release();
// In derived cases, the original internal buffer must not be used
buf = compositeBuffer();
concreteBuffer = directBuffer().writeByte(1);
buf.addComponent(concreteBuffer.slice());
assertNotSame(concreteBuffer.internalNioBuffer(0, 1), buf.internalNioBuffer(0, 1));
buf.release();
buf = compositeBuffer();
concreteBuffer = directBuffer().writeByte(1);
buf.addComponent(concreteBuffer.duplicate());
assertNotSame(concreteBuffer.internalNioBuffer(0, 1), buf.internalNioBuffer(0, 1));
buf.release();
}
@Test
@ -1045,6 +1116,71 @@ public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
cbuf.release();
}
@Test
public void testAddFlattenedComponents() {
ByteBuf b1 = Unpooled.wrappedBuffer(new byte[] { 1, 2, 3 });
CompositeByteBuf newComposite = Unpooled.compositeBuffer()
.addComponent(true, b1)
.addFlattenedComponents(true, b1.retain())
.addFlattenedComponents(true, Unpooled.EMPTY_BUFFER);
assertEquals(2, newComposite.numComponents());
assertEquals(6, newComposite.capacity());
assertEquals(6, newComposite.writerIndex());
// It is important to use a pooled allocator here to ensure
// the slices returned by readRetainedSlice are of type
// PooledSlicedByteBuf, which maintains an independent refcount
// (so that we can be sure to cover this case)
ByteBuf buffer = PooledByteBufAllocator.DEFAULT.buffer()
.writeBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
// use mixture of slice and retained slice
ByteBuf s1 = buffer.readRetainedSlice(2);
ByteBuf s2 = s1.retainedSlice(0, 2);
ByteBuf s3 = buffer.slice(0, 2).retain();
ByteBuf s4 = s2.retainedSlice(0, 2);
buffer.release();
ByteBuf compositeToAdd = Unpooled.compositeBuffer()
.addComponent(s1)
.addComponent(Unpooled.EMPTY_BUFFER)
.addComponents(s2, s3, s4);
// set readable range to be from middle of first component
// to middle of penultimate component
compositeToAdd.setIndex(1, 5);
assertEquals(1, compositeToAdd.refCnt());
assertEquals(1, s4.refCnt());
ByteBuf compositeCopy = compositeToAdd.copy();
newComposite.addFlattenedComponents(true, compositeToAdd);
// verify that added range matches
ByteBufUtil.equals(compositeCopy, 0,
newComposite, 6, compositeCopy.readableBytes());
// should not include empty component or last component
// (latter outside of the readable range)
assertEquals(5, newComposite.numComponents());
assertEquals(10, newComposite.capacity());
assertEquals(10, newComposite.writerIndex());
assertEquals(0, compositeToAdd.refCnt());
// s4 wasn't in added range so should have been jettisoned
assertEquals(0, s4.refCnt());
assertEquals(1, newComposite.refCnt());
// releasing composite should release the remaining components
newComposite.release();
assertEquals(0, newComposite.refCnt());
assertEquals(0, s1.refCnt());
assertEquals(0, s2.refCnt());
assertEquals(0, s3.refCnt());
assertEquals(0, b1.refCnt());
}
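The test above exercises addFlattenedComponents(...) against retained slices; a smaller hedged sketch of how it differs from addComponent(...) when the added buffer is itself a composite:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public final class FlattenedComponentsSketch {
    public static void main(String[] args) {
        CompositeByteBuf inner = Unpooled.compositeBuffer()
                .addComponent(true, Unpooled.wrappedBuffer(new byte[] { 1, 2 }))
                .addComponent(true, Unpooled.wrappedBuffer(new byte[] { 3, 4 }));

        // addComponent nests the composite as a single component ...
        CompositeByteBuf nested = Unpooled.compositeBuffer()
                .addComponent(true, inner.retain());
        // ... while addFlattenedComponents unwraps it into its readable parts
        // and takes over the caller's reference to inner.
        CompositeByteBuf flat = Unpooled.compositeBuffer()
                .addFlattenedComponents(true, inner);

        System.out.println(nested.numComponents()); // 1
        System.out.println(flat.numComponents());   // 2

        nested.release();
        flat.release();
    }
}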
@Test
public void testIterator() {
CompositeByteBuf cbuf = compositeBuffer();
@ -1201,6 +1337,40 @@ public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
assertEquals(0, b2.refCnt());
}
@Test
public void testReleasesOnShrink2() {
// It is important to use a pooled allocator here to ensure
// the slices returned by readRetainedSlice are of type
// PooledSlicedByteBuf, which maintains an independent refcount
// (so that we can be sure to cover this case)
ByteBuf buffer = PooledByteBufAllocator.DEFAULT.buffer();
buffer.writeShort(1).writeShort(2);
ByteBuf b1 = buffer.readRetainedSlice(2);
ByteBuf b2 = b1.retainedSlice(b1.readerIndex(), 2);
// composite takes ownership of b1 and b2
ByteBuf composite = Unpooled.compositeBuffer()
.addComponents(b1, b2);
assertEquals(4, composite.capacity());
// reduce capacity down to two, will drop the second component
composite.capacity(2);
assertEquals(2, composite.capacity());
// releasing composite should release the components
composite.release();
assertEquals(0, composite.refCnt());
assertEquals(0, b1.refCnt());
assertEquals(0, b2.refCnt());
// release last remaining ref to buffer
buffer.release();
assertEquals(0, buffer.refCnt());
}
@Test
public void testAllocatorIsSameWhenCopy() {
testAllocatorIsSameWhenCopy(false);
@ -1262,4 +1432,101 @@ public abstract class AbstractCompositeByteBufTest extends AbstractByteBufTest {
}
}
@Test
public void testComponentsLessThanLowerBound() {
try {
new CompositeByteBuf(ALLOC, true, 0);
fail();
} catch (IllegalArgumentException e) {
assertEquals("maxNumComponents: 0 (expected: >= 1)", e.getMessage());
}
}
@Test
public void testComponentsEqualToLowerBound() {
assertCompositeBufCreated(1);
}
@Test
public void testComponentsGreaterThanLowerBound() {
assertCompositeBufCreated(5);
}
/**
* Assert that a new {@linkplain CompositeByteBuf} was created successfully with the desired number of max
* components.
*/
private static void assertCompositeBufCreated(int expectedMaxComponents) {
CompositeByteBuf buf = new CompositeByteBuf(ALLOC, true, expectedMaxComponents);
assertEquals(expectedMaxComponents, buf.maxNumComponents());
assertTrue(buf.release());
}
@Test
public void testDiscardSomeReadBytesCorrectlyUpdatesLastAccessed() {
testDiscardCorrectlyUpdatesLastAccessed(true);
}
@Test
public void testDiscardReadBytesCorrectlyUpdatesLastAccessed() {
testDiscardCorrectlyUpdatesLastAccessed(false);
}
private static void testDiscardCorrectlyUpdatesLastAccessed(boolean discardSome) {
CompositeByteBuf cbuf = compositeBuffer();
List<ByteBuf> buffers = new ArrayList<ByteBuf>(4);
for (int i = 0; i < 4; i++) {
ByteBuf buf = buffer().writeInt(i);
cbuf.addComponent(true, buf);
buffers.add(buf);
}
// Skip the first 2 bytes which means even if we call discard*ReadBytes() later we can not drop the first
// component as it is still used.
cbuf.skipBytes(2);
if (discardSome) {
cbuf.discardSomeReadBytes();
} else {
cbuf.discardReadBytes();
}
assertEquals(4, cbuf.numComponents());
// Now skip 3 bytes which means we should be able to drop the first component on the next discard*ReadBytes()
// call.
cbuf.skipBytes(3);
if (discardSome) {
cbuf.discardSomeReadBytes();
} else {
cbuf.discardReadBytes();
}
assertEquals(3, cbuf.numComponents());
// Now skip again 3 bytes which should bring our readerIndex == start of the 3rd component.
cbuf.skipBytes(3);
// Read one int (4 bytes), which should bring our readerIndex to the start of the 4th component.
assertEquals(2, cbuf.readInt());
if (discardSome) {
cbuf.discardSomeReadBytes();
} else {
cbuf.discardReadBytes();
}
// Now all except the last component should have been dropped / released.
assertEquals(1, cbuf.numComponents());
assertEquals(3, cbuf.readInt());
if (discardSome) {
cbuf.discardSomeReadBytes();
} else {
cbuf.discardReadBytes();
}
assertEquals(0, cbuf.numComponents());
// These should have been released already.
for (ByteBuf buffer: buffers) {
assertEquals(0, buffer.refCnt());
}
assertTrue(cbuf.release());
}
}
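A minimal sketch (not taken from the patch) of the behaviour these CompositeByteBuf tests exercise: once the reader index has moved past a whole component, discardReadBytes() drops that component and releases it, because the composite owns buffers added via addComponent(true, ...). The class name and sizes are illustrative.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;

public final class CompositeDiscardSketch {
    public static void main(String[] args) {
        CompositeByteBuf cbuf = Unpooled.compositeBuffer();
        ByteBuf first = Unpooled.buffer().writeInt(1);
        ByteBuf second = Unpooled.buffer().writeInt(2);
        cbuf.addComponent(true, first);   // composite takes ownership of both
        cbuf.addComponent(true, second);

        cbuf.readInt();                   // readerIndex is now at the start of the 2nd component
        cbuf.discardReadBytes();          // drops and releases the fully read 1st component

        System.out.println(cbuf.numComponents()); // 1
        System.out.println(first.refCnt());       // 0 (released by the composite)
        cbuf.release();                           // releases the remaining component
    }
}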

View File

@ -21,6 +21,8 @@ import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.greaterThanOrEqualTo;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.fail;
public abstract class AbstractPooledByteBufTest extends AbstractByteBufTest {
@ -59,4 +61,61 @@ public abstract class AbstractPooledByteBufTest extends AbstractByteBufTest {
buf.release();
}
}
@Override
@Test
public void testMaxFastWritableBytes() {
ByteBuf buffer = newBuffer(150, 500).writerIndex(100);
assertEquals(50, buffer.writableBytes());
assertEquals(150, buffer.capacity());
assertEquals(500, buffer.maxCapacity());
assertEquals(400, buffer.maxWritableBytes());
int chunkSize = pooledByteBuf(buffer).maxLength;
assertTrue(chunkSize >= 150);
int remainingInAlloc = Math.min(chunkSize - 100, 400);
assertEquals(remainingInAlloc, buffer.maxFastWritableBytes());
// write up to max, chunk alloc should not change (same handle)
long handleBefore = pooledByteBuf(buffer).handle;
buffer.writeBytes(new byte[remainingInAlloc]);
assertEquals(handleBefore, pooledByteBuf(buffer).handle);
assertEquals(0, buffer.maxFastWritableBytes());
// writing one more should trigger a reallocation (new handle)
buffer.writeByte(7);
assertNotEquals(handleBefore, pooledByteBuf(buffer).handle);
// should not exceed maxCapacity even if chunk alloc does
buffer.capacity(500);
assertEquals(500 - buffer.writerIndex(), buffer.maxFastWritableBytes());
buffer.release();
}
private static PooledByteBuf<?> pooledByteBuf(ByteBuf buffer) {
// might need to unwrap if swapped (LE) and/or leak-aware-wrapped
while (!(buffer instanceof PooledByteBuf)) {
buffer = buffer.unwrap();
}
return (PooledByteBuf<?>) buffer;
}
@Test
public void testEnsureWritableDoesntGrowTooMuch() {
ByteBuf buffer = newBuffer(150, 500).writerIndex(100);
assertEquals(50, buffer.writableBytes());
int fastWritable = buffer.maxFastWritableBytes();
assertTrue(fastWritable > 50);
long handleBefore = pooledByteBuf(buffer).handle;
// capacity expansion should not cause reallocation
// (should grow precisely the specified amount)
buffer.ensureWritable(fastWritable);
assertEquals(handleBefore, pooledByteBuf(buffer).handle);
assertEquals(100 + fastWritable, buffer.capacity());
assertEquals(buffer.writableBytes(), buffer.maxFastWritableBytes());
buffer.release();
}
}
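A small usage sketch (not from the patch) of maxFastWritableBytes() as exercised by the tests above: grow only by what the already-allocated pooled chunk can absorb, so ensureWritable() expands in place instead of copying into a new chunk. The requested sizes are illustrative.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class FastWritableSketch {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer(150, 500);
        buf.writerIndex(100);

        int fast = buf.maxFastWritableBytes(); // >= writableBytes(), capped by maxCapacity
        buf.ensureWritable(fast);              // grows in place, no copy into a new chunk
        System.out.println(buf.capacity());    // 100 + fast

        buf.release();
    }
}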

View File

@ -22,7 +22,6 @@ import java.nio.ByteOrder;
import java.util.Random;
import static org.hamcrest.Matchers.*;
import static org.hamcrest.Matchers.sameInstance;
import static org.junit.Assert.*;
/**

View File

@ -187,29 +187,110 @@ public class ByteBufStreamTest {
String s = in.readLine();
assertNull(s);
in.close();
ByteBuf buf2 = Unpooled.buffer();
int charCount = 7; // total chars in the string below without newline characters
byte[] abc = "\na\n\nb\r\nc\nd\ne".getBytes(utf8);
buf.writeBytes(abc);
in.mark(charCount);
assertEquals("", in.readLine());
assertEquals("a", in.readLine());
assertEquals("", in.readLine());
assertEquals("b", in.readLine());
assertEquals("c", in.readLine());
assertEquals("d", in.readLine());
assertEquals("e", in.readLine());
buf2.writeBytes(abc);
ByteBufInputStream in2 = new ByteBufInputStream(buf2, true);
in2.mark(charCount);
assertEquals("", in2.readLine());
assertEquals("a", in2.readLine());
assertEquals("", in2.readLine());
assertEquals("b", in2.readLine());
assertEquals("c", in2.readLine());
assertEquals("d", in2.readLine());
assertEquals("e", in2.readLine());
assertNull(in.readLine());
in.reset();
in2.reset();
int count = 0;
while (in.readLine() != null) {
while (in2.readLine() != null) {
++count;
if (count > charCount) {
fail("readLine() should have returned null");
}
}
assertEquals(charCount, count);
in2.close();
}
@Test
public void testRead() throws Exception {
// case1
ByteBuf buf = Unpooled.buffer(16);
buf.writeBytes(new byte[]{1, 2, 3, 4, 5, 6});
ByteBufInputStream in = new ByteBufInputStream(buf, 3);
assertEquals(1, in.read());
assertEquals(2, in.read());
assertEquals(3, in.read());
assertEquals(-1, in.read());
assertEquals(-1, in.read());
assertEquals(-1, in.read());
buf.release();
in.close();
// case2
ByteBuf buf2 = Unpooled.buffer(16);
buf2.writeBytes(new byte[]{1, 2, 3, 4, 5, 6});
ByteBufInputStream in2 = new ByteBufInputStream(buf2, 4);
assertEquals(1, in2.read());
assertEquals(2, in2.read());
assertEquals(3, in2.read());
assertEquals(4, in2.read());
assertNotEquals(5, in2.read());
assertEquals(-1, in2.read());
buf2.release();
in2.close();
}
@Test
public void testReadLineLengthRespected1() throws Exception {
// case1
ByteBuf buf = Unpooled.buffer(16);
buf.writeBytes(new byte[] { 1, 2, 3, 4, 5, 6 });
ByteBufInputStream in = new ByteBufInputStream(buf, 0);
assertNull(in.readLine());
buf.release();
in.close();
}
@Test
public void testReadLineLengthRespected2() throws Exception {
ByteBuf buf2 = Unpooled.buffer(16);
buf2.writeBytes(new byte[] { 'A', 'B', '\n', 'C', 'E', 'F'});
ByteBufInputStream in2 = new ByteBufInputStream(buf2, 4);
assertEquals("AB", in2.readLine());
assertEquals("C", in2.readLine());
assertNull(in2.readLine());
buf2.release();
in2.close();
}
@Test(expected = EOFException.class)
public void testReadByteLengthRespected() throws Exception {
// case1
ByteBuf buf = Unpooled.buffer(16);
buf.writeBytes(new byte[] { 1, 2, 3, 4, 5, 6 });
ByteBufInputStream in = new ByteBufInputStream(buf, 0);
try {
in.readByte();
} finally {
buf.release();
in.close();
}
}
}
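A minimal sketch (not from the patch) of the length-limited ByteBufInputStream behaviour the new tests pin down: bytes beyond the supplied length are simply not visible to the stream. The content is illustrative.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufInputStream;
import io.netty.buffer.Unpooled;

public final class LimitedStreamSketch {
    public static void main(String[] args) throws Exception {
        ByteBuf buf = Unpooled.buffer();
        buf.writeBytes(new byte[] { 'A', 'B', '\n', 'C', 'D', 'E' });

        // Only the first 4 bytes are exposed through the stream.
        ByteBufInputStream in = new ByteBufInputStream(buf, 4);
        System.out.println(in.readLine()); // AB
        System.out.println(in.readLine()); // C
        System.out.println(in.readLine()); // null -> length limit reached

        in.close();
        buf.release();
    }
}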

View File

@ -510,6 +510,97 @@ public class ByteBufUtilTest {
assertTrue(buf instanceof WrappedByteBuf);
}
@Test
public void testWriteUtf8Subsequence() {
String usAscii = "Some UTF-8 like äÄ∏ŒŒ";
ByteBuf buf = Unpooled.buffer(16);
buf.writeBytes(usAscii.substring(5, 18).getBytes(CharsetUtil.UTF_8));
ByteBuf buf2 = Unpooled.buffer(16);
ByteBufUtil.writeUtf8(buf2, usAscii, 5, 18);
assertEquals(buf, buf2);
buf.release();
buf2.release();
}
@Test
public void testReserveAndWriteUtf8Subsequence() {
String usAscii = "Some UTF-8 like äÄ∏ŒŒ";
ByteBuf buf = Unpooled.buffer(16);
buf.writeBytes(usAscii.substring(5, 18).getBytes(CharsetUtil.UTF_8));
ByteBuf buf2 = Unpooled.buffer(16);
int count = ByteBufUtil.reserveAndWriteUtf8(buf2, usAscii, 5, 18, 16);
assertEquals(buf, buf2);
assertEquals(buf.readableBytes(), count);
buf.release();
buf2.release();
}
@Test
public void testUtf8BytesSubsequence() {
String usAscii = "Some UTF-8 like äÄ∏ŒŒ";
assertEquals(usAscii.substring(5, 18).getBytes(CharsetUtil.UTF_8).length,
ByteBufUtil.utf8Bytes(usAscii, 5, 18));
}
private static int[][] INVALID_RANGES = new int[][] {
{ -1, 5 }, { 5, 30 }, { 10, 5 }
};
interface TestMethod {
int invoke(Object... args);
}
private void testInvalidSubsequences(TestMethod method) {
for (int [] range : INVALID_RANGES) {
ByteBuf buf = Unpooled.buffer(16);
try {
method.invoke(buf, "Some UTF-8 like äÄ∏ŒŒ", range[0], range[1]);
fail("Did not throw IndexOutOfBoundsException for range (" + range[0] + ", " + range[1] + ")");
} catch (IndexOutOfBoundsException iiobe) {
// expected
} finally {
assertFalse(buf.isReadable());
buf.release();
}
}
}
@Test
public void testWriteUtf8InvalidSubsequences() {
testInvalidSubsequences(new TestMethod() {
@Override
public int invoke(Object... args) {
return ByteBufUtil.writeUtf8((ByteBuf) args[0], (String) args[1],
(Integer) args[2], (Integer) args[3]);
}
});
}
@Test
public void testReserveAndWriteUtf8InvalidSubsequences() {
testInvalidSubsequences(new TestMethod() {
@Override
public int invoke(Object... args) {
return ByteBufUtil.reserveAndWriteUtf8((ByteBuf) args[0], (String) args[1],
(Integer) args[2], (Integer) args[3], 32);
}
});
}
@Test
public void testUtf8BytesInvalidSubsequences() {
testInvalidSubsequences(new TestMethod() {
@Override
public int invoke(Object... args) {
return ByteBufUtil.utf8Bytes((String) args[1], (Integer) args[2], (Integer) args[3]);
}
});
}
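A hedged sketch (not from the patch) of the subsequence overloads tested above — utf8Bytes, writeUtf8 and reserveAndWriteUtf8 with start/end indices — which encode a slice of a CharSequence without allocating a substring. The sample string is illustrative.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class Utf8SubsequenceSketch {
    public static void main(String[] args) {
        String s = "key=value";
        ByteBuf buf = Unpooled.buffer();

        int needed = ByteBufUtil.utf8Bytes(s, 4, 9);                     // bytes needed for "value"
        int written = ByteBufUtil.reserveAndWriteUtf8(buf, s, 4, 9, needed);

        System.out.println(written);                          // 5
        System.out.println(buf.toString(CharsetUtil.UTF_8));  // value
        buf.release();
    }
}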
@Test
public void testDecodeUsAscii() {
testDecodeString("This is a test", CharsetUtil.US_ASCII);

View File

@ -14,6 +14,7 @@
* under the License.
*/package io.netty.buffer;
import io.netty.util.CharsetUtil;
import org.junit.Test;
import static org.hamcrest.Matchers.*;
@ -93,4 +94,11 @@ public class EmptyByteBufTest {
assertTrue(emptyAbstract.release());
assertFalse(empty.release());
}
@Test
public void testGetCharSequence() {
EmptyByteBuf empty = new EmptyByteBuf(UnpooledByteBufAllocator.DEFAULT);
assertEquals("", empty.readCharSequence(0, CharsetUtil.US_ASCII));
}
}

View File

@ -16,6 +16,7 @@
package io.netty.buffer;
import io.netty.util.internal.PlatformDependent;
import org.junit.Assert;
import org.junit.Test;
@ -43,6 +44,25 @@ public class PoolArenaTest {
}
}
@Test
public void testDirectArenaOffsetCacheLine() throws Exception {
int capacity = 5;
int alignment = 128;
for (int i = 0; i < 1000; i++) {
ByteBuffer bb = PlatformDependent.useDirectBufferNoCleaner()
? PlatformDependent.allocateDirectNoCleaner(capacity + alignment)
: ByteBuffer.allocateDirect(capacity + alignment);
PoolArena.DirectArena arena = new PoolArena.DirectArena(null, 0, 0, 9, 9, alignment);
int offset = arena.offsetCacheLine(bb);
long address = PlatformDependent.directBufferAddress(bb);
Assert.assertEquals(0, (offset + address) & (alignment - 1));
PlatformDependent.freeDirectBuffer(bb);
}
}
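The invariant asserted above, (offset + address) & (alignment - 1) == 0, boils down to the usual power-of-two padding formula. A standalone sketch (not from the patch, with an illustrative address):

public final class AlignmentSketch {
    // Offset to add to address so that the result is a multiple of alignment
    // (alignment must be a power of two, e.g. a 128-byte cache line).
    static long offsetToAlign(long address, int alignment) {
        return (-address) & (alignment - 1);
    }

    public static void main(String[] args) {
        long address = 0x7f_0000_1235L;
        int alignment = 128;
        long offset = offsetToAlign(address, alignment);
        System.out.println(((address + offset) & (alignment - 1)) == 0); // true
    }
}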
@Test
public final void testAllocationCounter() {
final PooledByteBufAllocator allocator = new PooledByteBufAllocator(

View File

@ -63,6 +63,21 @@ public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest<Poo
return allocator.metric().chunkSize();
}
@Test
public void testTrim() {
PooledByteBufAllocator allocator = newAllocator(true);
// Should return false as we never allocated from this thread yet.
assertFalse(allocator.trimCurrentThreadCache());
ByteBuf directBuffer = allocator.directBuffer();
assertTrue(directBuffer.release());
// Should return true now that a cache exists for the calling thread.
assertTrue(allocator.trimCurrentThreadCache());
}
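A short sketch (not from the patch) of what testTrim checks: trimCurrentThreadCache() only has something to free once the calling thread has allocated and therefore owns a thread-local cache.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public final class TrimSketch {
    public static void main(String[] args) {
        PooledByteBufAllocator alloc = new PooledByteBufAllocator(true);
        System.out.println(alloc.trimCurrentThreadCache()); // false, this thread has not allocated yet

        ByteBuf buf = alloc.directBuffer();
        buf.release();                                      // goes back into the thread-local cache
        System.out.println(alloc.trimCurrentThreadCache()); // true, a cache existed and was trimmed
    }
}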
@Test
public void testPooledUnsafeHeapBufferAndUnsafeDirectBuffer() {
PooledByteBufAllocator allocator = newAllocator(true);
@ -77,6 +92,25 @@ public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest<Poo
heapBuffer.release();
}
@Test
public void testIOBuffersAreDirectWhenUnsafeAvailableOrDirectBuffersPooled() {
PooledByteBufAllocator allocator = newAllocator(true);
ByteBuf ioBuffer = allocator.ioBuffer();
assertTrue(ioBuffer.isDirect());
ioBuffer.release();
PooledByteBufAllocator unpooledAllocator = newUnpooledAllocator();
ioBuffer = unpooledAllocator.ioBuffer();
if (PlatformDependent.hasUnsafe()) {
assertTrue(ioBuffer.isDirect());
} else {
assertFalse(ioBuffer.isDirect());
}
ioBuffer.release();
}
@Test
public void testWithoutUseCacheForAllThreads() {
assertFalse(Thread.currentThread() instanceof FastThreadLocalThread);
@ -430,8 +464,14 @@ public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest<Poo
Thread.sleep(100);
}
} finally {
// First mark all AllocationThreads as finished, then wait until they have completed
// and rethrow if there was any error.
for (AllocationThread t : threads) {
t.finish();
t.markAsFinished();
}
for (AllocationThread t: threads) {
t.joinAndCheckForError();
}
}
}
@ -461,7 +501,7 @@ public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest<Poo
private final ByteBufAllocator allocator;
private final AtomicReference<Object> finish = new AtomicReference<Object>();
public AllocationThread(ByteBufAllocator allocator) {
AllocationThread(ByteBufAllocator allocator) {
this.allocator = allocator;
}
@ -494,14 +534,17 @@ public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest<Poo
}
}
public boolean isFinished() {
boolean isFinished() {
return finish.get() != null;
}
public void finish() throws Throwable {
void markAsFinished() {
finish.compareAndSet(null, Boolean.TRUE);
}
void joinAndCheckForError() throws Throwable {
try {
// Mark as finished if not already done, but ensure we do not override a previously set error.
finish.compareAndSet(null, Boolean.TRUE);
join();
} finally {
releaseBuffers();
@ -509,7 +552,7 @@ public class PooledByteBufAllocatorTest extends AbstractByteBufAllocatorTest<Poo
checkForError();
}
public void checkForError() throws Throwable {
void checkForError() throws Throwable {
Object obj = finish.get();
if (obj instanceof Throwable) {
throw (Throwable) obj;

View File

@ -236,6 +236,20 @@ public class ReadOnlyDirectByteBufferBufTest {
buf.release();
}
@Test(expected = IndexOutOfBoundsException.class)
public void testGetBytesByteBuffer() {
byte[] bytes = {'a', 'b', 'c', 'd', 'e', 'f', 'g'};
// Ensure the destination buffer is bigger than what is in the ByteBuf.
ByteBuffer nioBuffer = ByteBuffer.allocate(bytes.length + 1);
ByteBuf buffer = buffer(((ByteBuffer) allocate(bytes.length)
.put(bytes).flip()).asReadOnlyBuffer());
try {
buffer.getBytes(buffer.readerIndex(), nioBuffer);
} finally {
buffer.release();
}
}
@Test
public void testCopy() {
ByteBuf buf = buffer(((ByteBuffer) allocate(16).putLong(1).putLong(2).flip()).asReadOnlyBuffer());

View File

@ -20,7 +20,7 @@
<parent>
<groupId>io.netty</groupId>
<artifactId>netty-parent</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</parent>
<artifactId>netty-codec-dns</artifactId>
@ -33,6 +33,21 @@
</properties>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-common</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-buffer</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-codec</artifactId>

View File

@ -21,6 +21,7 @@ import io.netty.util.internal.UnstableApi;
import java.net.IDN;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
/**
* A skeletal implementation of {@link DnsRecord}.
@ -62,9 +63,7 @@ public abstract class AbstractDnsRecord implements DnsRecord {
* @param timeToLive the TTL value of the record
*/
protected AbstractDnsRecord(String name, DnsRecordType type, int dnsClass, long timeToLive) {
if (timeToLive < 0) {
throw new IllegalArgumentException("timeToLive: " + timeToLive + " (expected: >= 0)");
}
checkPositiveOrZero(timeToLive, "timeToLive");
// Convert to ASCII which will also check that the length is not too big.
// See:
// - https://github.com/netty/netty/issues/4937

View File

@ -26,8 +26,6 @@ import io.netty.util.internal.UnstableApi;
import java.net.InetSocketAddress;
import java.util.List;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* Encodes a {@link DatagramDnsQuery} (or an {@link AddressedEnvelope} of {@link DnsQuery}) into a
* {@link DatagramPacket}.
@ -36,7 +34,7 @@ import static io.netty.util.internal.ObjectUtil.checkNotNull;
@ChannelHandler.Sharable
public class DatagramDnsQueryEncoder extends MessageToMessageEncoder<AddressedEnvelope<DnsQuery, InetSocketAddress>> {
private final DnsRecordEncoder recordEncoder;
private final DnsQueryEncoder encoder;
/**
* Creates a new encoder with {@linkplain DnsRecordEncoder#DEFAULT the default record encoder}.
@ -49,7 +47,7 @@ public class DatagramDnsQueryEncoder extends MessageToMessageEncoder<AddressedEn
* Creates a new encoder with the specified {@code recordEncoder}.
*/
public DatagramDnsQueryEncoder(DnsRecordEncoder recordEncoder) {
this.recordEncoder = checkNotNull(recordEncoder, "recordEncoder");
this.encoder = new DnsQueryEncoder(recordEncoder);
}
@Override
@ -63,9 +61,7 @@ public class DatagramDnsQueryEncoder extends MessageToMessageEncoder<AddressedEn
boolean success = false;
try {
encodeHeader(query, buf);
encodeQuestions(query, buf);
encodeRecords(query, DnsSection.ADDITIONAL, buf);
encoder.encode(query, buf);
success = true;
} finally {
if (!success) {
@ -85,38 +81,4 @@ public class DatagramDnsQueryEncoder extends MessageToMessageEncoder<AddressedEn
@SuppressWarnings("unused") AddressedEnvelope<DnsQuery, InetSocketAddress> msg) throws Exception {
return ctx.alloc().ioBuffer(1024);
}
/**
* Encodes the header that is always 12 bytes long.
*
* @param query the query header being encoded
* @param buf the buffer the encoded data should be written to
*/
private static void encodeHeader(DnsQuery query, ByteBuf buf) {
buf.writeShort(query.id());
int flags = 0;
flags |= (query.opCode().byteValue() & 0xFF) << 14;
if (query.isRecursionDesired()) {
flags |= 1 << 8;
}
buf.writeShort(flags);
buf.writeShort(query.count(DnsSection.QUESTION));
buf.writeShort(0); // answerCount
buf.writeShort(0); // authorityResourceCount
buf.writeShort(query.count(DnsSection.ADDITIONAL));
}
private void encodeQuestions(DnsQuery query, ByteBuf buf) throws Exception {
final int count = query.count(DnsSection.QUESTION);
for (int i = 0; i < count; i++) {
recordEncoder.encodeQuestion((DnsQuestion) query.recordAt(DnsSection.QUESTION, i), buf);
}
}
private void encodeRecords(DnsQuery query, DnsSection section, ByteBuf buf) throws Exception {
final int count = query.count(section);
for (int i = 0; i < count; i++) {
recordEncoder.encodeRecord(query.recordAt(section, i), buf);
}
}
}

View File

@ -15,18 +15,15 @@
*/
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.socket.DatagramPacket;
import io.netty.handler.codec.CorruptedFrameException;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.util.internal.UnstableApi;
import java.net.InetSocketAddress;
import java.util.List;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
/**
* Decodes a {@link DatagramPacket} into a {@link DatagramDnsResponse}.
*/
@ -34,7 +31,7 @@ import static io.netty.util.internal.ObjectUtil.checkNotNull;
@ChannelHandler.Sharable
public class DatagramDnsResponseDecoder extends MessageToMessageDecoder<DatagramPacket> {
private final DnsRecordDecoder recordDecoder;
private final DnsResponseDecoder<InetSocketAddress> responseDecoder;
/**
* Creates a new decoder with {@linkplain DnsRecordDecoder#DEFAULT the default record decoder}.
@ -47,73 +44,17 @@ public class DatagramDnsResponseDecoder extends MessageToMessageDecoder<Datagram
* Creates a new decoder with the specified {@code recordDecoder}.
*/
public DatagramDnsResponseDecoder(DnsRecordDecoder recordDecoder) {
this.recordDecoder = checkNotNull(recordDecoder, "recordDecoder");
this.responseDecoder = new DnsResponseDecoder<InetSocketAddress>(recordDecoder) {
@Override
protected DnsResponse newResponse(InetSocketAddress sender, InetSocketAddress recipient,
int id, DnsOpCode opCode, DnsResponseCode responseCode) {
return new DatagramDnsResponse(sender, recipient, id, opCode, responseCode);
}
};
}
@Override
protected void decode(ChannelHandlerContext ctx, DatagramPacket packet, List<Object> out) throws Exception {
final ByteBuf buf = packet.content();
final DnsResponse response = newResponse(packet, buf);
boolean success = false;
try {
final int questionCount = buf.readUnsignedShort();
final int answerCount = buf.readUnsignedShort();
final int authorityRecordCount = buf.readUnsignedShort();
final int additionalRecordCount = buf.readUnsignedShort();
decodeQuestions(response, buf, questionCount);
decodeRecords(response, DnsSection.ANSWER, buf, answerCount);
decodeRecords(response, DnsSection.AUTHORITY, buf, authorityRecordCount);
decodeRecords(response, DnsSection.ADDITIONAL, buf, additionalRecordCount);
out.add(response);
success = true;
} finally {
if (!success) {
response.release();
}
}
}
private static DnsResponse newResponse(DatagramPacket packet, ByteBuf buf) {
final int id = buf.readUnsignedShort();
final int flags = buf.readUnsignedShort();
if (flags >> 15 == 0) {
throw new CorruptedFrameException("not a response");
}
final DnsResponse response = new DatagramDnsResponse(
packet.sender(),
packet.recipient(),
id,
DnsOpCode.valueOf((byte) (flags >> 11 & 0xf)), DnsResponseCode.valueOf((byte) (flags & 0xf)));
response.setRecursionDesired((flags >> 8 & 1) == 1);
response.setAuthoritativeAnswer((flags >> 10 & 1) == 1);
response.setTruncated((flags >> 9 & 1) == 1);
response.setRecursionAvailable((flags >> 7 & 1) == 1);
response.setZ(flags >> 4 & 0x7);
return response;
}
private void decodeQuestions(DnsResponse response, ByteBuf buf, int questionCount) throws Exception {
for (int i = questionCount; i > 0; i --) {
response.addRecord(DnsSection.QUESTION, recordDecoder.decodeQuestion(buf));
}
}
private void decodeRecords(
DnsResponse response, DnsSection section, ByteBuf buf, int count) throws Exception {
for (int i = count; i > 0; i --) {
final DnsRecord r = recordDecoder.decodeRecord(buf);
if (r == null) {
// Truncated response
break;
}
response.addRecord(section, r);
}
out.add(responseDecoder.decode(packet.sender(), packet.recipient(), packet.content()));
}
}

View File

@ -16,8 +16,6 @@
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.CorruptedFrameException;
import io.netty.util.CharsetUtil;
import io.netty.util.internal.UnstableApi;
/**
@ -98,6 +96,11 @@ public class DefaultDnsRecordDecoder implements DnsRecordDecoder {
return new DefaultDnsPtrRecord(
name, dnsClass, timeToLive, decodeName0(in.duplicate().setIndex(offset, offset + length)));
}
if (type == DnsRecordType.CNAME || type == DnsRecordType.NS) {
return new DefaultDnsRawRecord(name, type, dnsClass, timeToLive,
DnsCodecUtil.decompressDomainName(
in.duplicate().setIndex(offset, offset + length)));
}
return new DefaultDnsRawRecord(
name, type, dnsClass, timeToLive, in.retainedDuplicate().setIndex(offset, offset + length));
}
@ -123,69 +126,6 @@ public class DefaultDnsRecordDecoder implements DnsRecordDecoder {
* @return the domain name for an entry
*/
public static String decodeName(ByteBuf in) {
int position = -1;
int checked = 0;
final int end = in.writerIndex();
final int readable = in.readableBytes();
// Looking at the spec we should always have at least enough readable bytes to read a byte here but it seems
// some servers do not respect this for empty names. So just workaround this and return an empty name in this
// case.
//
// See:
// - https://github.com/netty/netty/issues/5014
// - https://www.ietf.org/rfc/rfc1035.txt , Section 3.1
if (readable == 0) {
return ROOT;
}
final StringBuilder name = new StringBuilder(readable << 1);
while (in.isReadable()) {
final int len = in.readUnsignedByte();
final boolean pointer = (len & 0xc0) == 0xc0;
if (pointer) {
if (position == -1) {
position = in.readerIndex() + 1;
}
if (!in.isReadable()) {
throw new CorruptedFrameException("truncated pointer in a name");
}
final int next = (len & 0x3f) << 8 | in.readUnsignedByte();
if (next >= end) {
throw new CorruptedFrameException("name has an out-of-range pointer");
}
in.readerIndex(next);
// check for loops
checked += 2;
if (checked >= end) {
throw new CorruptedFrameException("name contains a loop.");
}
} else if (len != 0) {
if (!in.isReadable(len)) {
throw new CorruptedFrameException("truncated label in a name");
}
name.append(in.toString(in.readerIndex(), len, CharsetUtil.UTF_8)).append('.');
in.skipBytes(len);
} else { // len == 0
break;
}
}
if (position != -1) {
in.readerIndex(position);
}
if (name.length() == 0) {
return ROOT;
}
if (name.charAt(name.length() - 1) != '.') {
name.append('.');
}
return name.toString();
return DnsCodecUtil.decodeDomainName(in);
}
}

View File

@ -16,14 +16,11 @@
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.channel.socket.InternetProtocolFamily;
import io.netty.handler.codec.UnsupportedMessageTypeException;
import io.netty.util.internal.StringUtil;
import io.netty.util.internal.UnstableApi;
import static io.netty.handler.codec.dns.DefaultDnsRecordDecoder.ROOT;
/**
* The default {@link DnsRecordEncoder} implementation.
*
@ -141,25 +138,7 @@ public class DefaultDnsRecordEncoder implements DnsRecordEncoder {
}
protected void encodeName(String name, ByteBuf buf) throws Exception {
if (ROOT.equals(name)) {
// Root domain
buf.writeByte(0);
return;
}
final String[] labels = name.split("\\.");
for (String label : labels) {
final int labelLen = label.length();
if (labelLen == 0) {
// zero-length label means the end of the name.
break;
}
buf.writeByte(labelLen);
ByteBufUtil.writeAscii(buf, label);
}
buf.writeByte(0); // marks end of name field
DnsCodecUtil.encodeDomainName(name, buf);
}
private static byte padWithZeros(byte b, int lowOrderBitsToPreserve) {

View File

@ -0,0 +1,132 @@
/*
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.CorruptedFrameException;
import io.netty.util.CharsetUtil;
import static io.netty.handler.codec.dns.DefaultDnsRecordDecoder.*;
final class DnsCodecUtil {
private DnsCodecUtil() {
// Util class
}
static void encodeDomainName(String name, ByteBuf buf) {
if (ROOT.equals(name)) {
// Root domain
buf.writeByte(0);
return;
}
final String[] labels = name.split("\\.");
for (String label : labels) {
final int labelLen = label.length();
if (labelLen == 0) {
// zero-length label means the end of the name.
break;
}
buf.writeByte(labelLen);
ByteBufUtil.writeAscii(buf, label);
}
buf.writeByte(0); // marks end of name field
}
static String decodeDomainName(ByteBuf in) {
int position = -1;
int checked = 0;
final int end = in.writerIndex();
final int readable = in.readableBytes();
// Looking at the spec we should always have at least enough readable bytes to read a byte here but it seems
// some servers do not respect this for empty names. So just work around this and return an empty name in this
// case.
//
// See:
// - https://github.com/netty/netty/issues/5014
// - https://www.ietf.org/rfc/rfc1035.txt , Section 3.1
if (readable == 0) {
return ROOT;
}
final StringBuilder name = new StringBuilder(readable << 1);
while (in.isReadable()) {
final int len = in.readUnsignedByte();
final boolean pointer = (len & 0xc0) == 0xc0;
if (pointer) {
if (position == -1) {
position = in.readerIndex() + 1;
}
if (!in.isReadable()) {
throw new CorruptedFrameException("truncated pointer in a name");
}
final int next = (len & 0x3f) << 8 | in.readUnsignedByte();
if (next >= end) {
throw new CorruptedFrameException("name has an out-of-range pointer");
}
in.readerIndex(next);
// check for loops
checked += 2;
if (checked >= end) {
throw new CorruptedFrameException("name contains a loop.");
}
} else if (len != 0) {
if (!in.isReadable(len)) {
throw new CorruptedFrameException("truncated label in a name");
}
name.append(in.toString(in.readerIndex(), len, CharsetUtil.UTF_8)).append('.');
in.skipBytes(len);
} else { // len == 0
break;
}
}
if (position != -1) {
in.readerIndex(position);
}
if (name.length() == 0) {
return ROOT;
}
if (name.charAt(name.length() - 1) != '.') {
name.append('.');
}
return name.toString();
}
/**
* Decompress pointer data.
* @param compression compressed data
* @return decompressed data
*/
static ByteBuf decompressDomainName(ByteBuf compression) {
String domainName = decodeDomainName(compression);
ByteBuf result = compression.alloc().buffer(domainName.length() << 1);
encodeDomainName(domainName, result);
return result;
}
}
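A minimal sketch (not from the patch) of the compression-pointer handling this new util class implements: a pointer is two bytes, 0xC0 plus an offset into the message, and decompressDomainName() re-expands it into a plain label sequence. Since DnsCodecUtil is package-private, the sketch assumes it lives in the io.netty.handler.codec.dns package; the bytes mirror the test data further below.

package io.netty.handler.codec.dns;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

final class CompressionPointerSketch {
    public static void main(String[] args) {
        byte[] wire = {
                5, 'n', 'e', 't', 't', 'y', 2, 'i', 'o', 0, // offset 0: netty.io as labels
                (byte) 0xC0, 0                              // offset 10: pointer back to offset 0
        };
        ByteBuf msg = Unpooled.wrappedBuffer(wire);
        // Decompress the 2-byte pointer at [10, 12) into the full label sequence.
        ByteBuf expanded = DnsCodecUtil.decompressDomainName(msg.duplicate().setIndex(10, 12));
        System.out.println(DnsCodecUtil.decodeDomainName(expanded.duplicate())); // netty.io.
        expanded.release();
        msg.release();
    }
}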

View File

@ -0,0 +1,75 @@
/*
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
final class DnsQueryEncoder {
private final DnsRecordEncoder recordEncoder;
/**
* Creates a new encoder with the specified {@code recordEncoder}.
*/
DnsQueryEncoder(DnsRecordEncoder recordEncoder) {
this.recordEncoder = checkNotNull(recordEncoder, "recordEncoder");
}
/**
* Encodes the given {@link DnsQuery} into a {@link ByteBuf}.
*/
void encode(DnsQuery query, ByteBuf out) throws Exception {
encodeHeader(query, out);
encodeQuestions(query, out);
encodeRecords(query, DnsSection.ADDITIONAL, out);
}
/**
* Encodes the header that is always 12 bytes long.
*
* @param query the query header being encoded
* @param buf the buffer the encoded data should be written to
*/
private static void encodeHeader(DnsQuery query, ByteBuf buf) {
buf.writeShort(query.id());
int flags = 0;
flags |= (query.opCode().byteValue() & 0xFF) << 14;
if (query.isRecursionDesired()) {
flags |= 1 << 8;
}
buf.writeShort(flags);
buf.writeShort(query.count(DnsSection.QUESTION));
buf.writeShort(0); // answerCount
buf.writeShort(0); // authorityResourceCount
buf.writeShort(query.count(DnsSection.ADDITIONAL));
}
private void encodeQuestions(DnsQuery query, ByteBuf buf) throws Exception {
final int count = query.count(DnsSection.QUESTION);
for (int i = 0; i < count; i++) {
recordEncoder.encodeQuestion((DnsQuestion) query.recordAt(DnsSection.QUESTION, i), buf);
}
}
private void encodeRecords(DnsQuery query, DnsSection section, ByteBuf buf) throws Exception {
final int count = query.count(section);
for (int i = 0; i < count; i++) {
recordEncoder.encodeRecord(query.recordAt(section, i), buf);
}
}
}

View File

@ -0,0 +1,97 @@
/*
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.CorruptedFrameException;
import java.net.SocketAddress;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
abstract class DnsResponseDecoder<A extends SocketAddress> {
private final DnsRecordDecoder recordDecoder;
/**
* Creates a new decoder with the specified {@code recordDecoder}.
*/
DnsResponseDecoder(DnsRecordDecoder recordDecoder) {
this.recordDecoder = checkNotNull(recordDecoder, "recordDecoder");
}
final DnsResponse decode(A sender, A recipient, ByteBuf buffer) throws Exception {
final int id = buffer.readUnsignedShort();
final int flags = buffer.readUnsignedShort();
if (flags >> 15 == 0) {
throw new CorruptedFrameException("not a response");
}
final DnsResponse response = newResponse(
sender,
recipient,
id,
DnsOpCode.valueOf((byte) (flags >> 11 & 0xf)), DnsResponseCode.valueOf((byte) (flags & 0xf)));
response.setRecursionDesired((flags >> 8 & 1) == 1);
response.setAuthoritativeAnswer((flags >> 10 & 1) == 1);
response.setTruncated((flags >> 9 & 1) == 1);
response.setRecursionAvailable((flags >> 7 & 1) == 1);
response.setZ(flags >> 4 & 0x7);
boolean success = false;
try {
final int questionCount = buffer.readUnsignedShort();
final int answerCount = buffer.readUnsignedShort();
final int authorityRecordCount = buffer.readUnsignedShort();
final int additionalRecordCount = buffer.readUnsignedShort();
decodeQuestions(response, buffer, questionCount);
decodeRecords(response, DnsSection.ANSWER, buffer, answerCount);
decodeRecords(response, DnsSection.AUTHORITY, buffer, authorityRecordCount);
decodeRecords(response, DnsSection.ADDITIONAL, buffer, additionalRecordCount);
success = true;
return response;
} finally {
if (!success) {
response.release();
}
}
}
protected abstract DnsResponse newResponse(A sender, A recipient, int id,
DnsOpCode opCode, DnsResponseCode responseCode) throws Exception;
private void decodeQuestions(DnsResponse response, ByteBuf buf, int questionCount) throws Exception {
for (int i = questionCount; i > 0; i --) {
response.addRecord(DnsSection.QUESTION, recordDecoder.decodeQuestion(buf));
}
}
private void decodeRecords(
DnsResponse response, DnsSection section, ByteBuf buf, int count) throws Exception {
for (int i = count; i > 0; i --) {
final DnsRecord r = recordDecoder.decodeRecord(buf);
if (r == null) {
// Truncated response
break;
}
response.addRecord(section, r);
}
}
}
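The flag bits this decoder pulls apart, shown on a typical response header value in a sketch that is not part of the patch. 0x8180 is illustrative (QR=1, opcode QUERY, RD=1, RA=1, RCODE NoError).

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public final class DnsFlagsSketch {
    public static void main(String[] args) {
        ByteBuf header = Unpooled.buffer().writeShort(0x8180);
        int flags = header.readUnsignedShort();
        System.out.println(flags >> 15);        // 1 -> QR: this is a response
        System.out.println(flags >> 11 & 0xf);  // 0 -> opcode QUERY
        System.out.println(flags >> 8 & 1);     // 1 -> recursion desired
        System.out.println(flags >> 7 & 1);     // 1 -> recursion available
        System.out.println(flags & 0xf);        // 0 -> RCODE NoError
        header.release();
    }
}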

View File

@ -0,0 +1,64 @@
/*
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.MessageToByteEncoder;
import io.netty.util.internal.UnstableApi;
@ChannelHandler.Sharable
@UnstableApi
public final class TcpDnsQueryEncoder extends MessageToByteEncoder<DnsQuery> {
private final DnsQueryEncoder encoder;
/**
* Creates a new encoder with {@linkplain DnsRecordEncoder#DEFAULT the default record encoder}.
*/
public TcpDnsQueryEncoder() {
this(DnsRecordEncoder.DEFAULT);
}
/**
* Creates a new encoder with the specified {@code recordEncoder}.
*/
public TcpDnsQueryEncoder(DnsRecordEncoder recordEncoder) {
this.encoder = new DnsQueryEncoder(recordEncoder);
}
@Override
protected void encode(ChannelHandlerContext ctx, DnsQuery msg, ByteBuf out) throws Exception {
// Length is two octets as defined by RFC-7766
// See https://tools.ietf.org/html/rfc7766#section-8
out.writerIndex(out.writerIndex() + 2);
encoder.encode(msg, out);
// Now fill in the correct length based on the amount of data that we wrote into the ByteBuf.
out.setShort(0, out.readableBytes() - 2);
}
@Override
protected ByteBuf allocateBuffer(ChannelHandlerContext ctx, @SuppressWarnings("unused") DnsQuery msg,
boolean preferDirect) {
if (preferDirect) {
return ctx.alloc().ioBuffer(1024);
} else {
return ctx.alloc().heapBuffer(1024);
}
}
}

View File

@ -0,0 +1,72 @@
/*
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.util.internal.UnstableApi;
import java.net.SocketAddress;
@UnstableApi
public final class TcpDnsResponseDecoder extends LengthFieldBasedFrameDecoder {
private final DnsResponseDecoder<SocketAddress> responseDecoder;
/**
* Creates a new decoder with {@linkplain DnsRecordDecoder#DEFAULT the default record decoder}.
*/
public TcpDnsResponseDecoder() {
this(DnsRecordDecoder.DEFAULT, 64 * 1024);
}
/**
* Creates a new decoder with the specified {@code recordDecoder} and {@code maxFrameLength}
*/
public TcpDnsResponseDecoder(DnsRecordDecoder recordDecoder, int maxFrameLength) {
// Length is two octets as defined by RFC-7766
// See https://tools.ietf.org/html/rfc7766#section-8
super(maxFrameLength, 0, 2, 0, 2);
this.responseDecoder = new DnsResponseDecoder<SocketAddress>(recordDecoder) {
@Override
protected DnsResponse newResponse(SocketAddress sender, SocketAddress recipient,
int id, DnsOpCode opCode, DnsResponseCode responseCode) {
return new DefaultDnsResponse(id, opCode, responseCode);
}
};
}
@Override
protected Object decode(ChannelHandlerContext ctx, ByteBuf in) throws Exception {
ByteBuf frame = (ByteBuf) super.decode(ctx, in);
if (frame == null) {
return null;
}
try {
return responseDecoder.decode(ctx.channel().remoteAddress(), ctx.channel().localAddress(), frame.slice());
} finally {
frame.release();
}
}
@Override
protected ByteBuf extractFrame(ChannelHandlerContext ctx, ByteBuf buffer, int index, int length) {
return buffer.copy(index, length);
}
}
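A hedged sketch (not part of the patch) of wiring the two new TCP handlers into a client pipeline: the encoder prepends the RFC 7766 2-octet length and the decoder strips it again. The class name, resolver address and surrounding bootstrap code are illustrative only.

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.dns.TcpDnsQueryEncoder;
import io.netty.handler.codec.dns.TcpDnsResponseDecoder;

public final class TcpDnsClientSketch {
    public static void main(String[] args) {
        NioEventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // length-prefixed DNS over TCP in both directions
                            ch.pipeline().addLast(new TcpDnsQueryEncoder(), new TcpDnsResponseDecoder());
                            // a handler consuming the decoded DnsResponse would be added here
                        }
                    });
            // b.connect("192.0.2.1", 53) ... connect/query omitted, only the wiring is shown
        } finally {
            group.shutdownGracefully();
        }
    }
}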

View File

@ -16,10 +16,11 @@
package io.netty.handler.codec.dns;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.*;
public class DefaultDnsRecordDecoderTest {
@ -89,6 +90,81 @@ public class DefaultDnsRecordDecoderTest {
}
}
@Test
public void testdecompressCompressPointer() {
byte[] compressionPointer = {
5, 'n', 'e', 't', 't', 'y', 2, 'i', 'o', 0,
(byte) 0xC0, 0
};
ByteBuf buffer = Unpooled.wrappedBuffer(compressionPointer);
ByteBuf uncompressed = null;
try {
uncompressed = DnsCodecUtil.decompressDomainName(buffer.duplicate().setIndex(10, 12));
assertEquals(0, ByteBufUtil.compare(buffer.duplicate().setIndex(0, 10), uncompressed));
} finally {
buffer.release();
if (uncompressed != null) {
uncompressed.release();
}
}
}
@Test
public void testdecompressNestedCompressionPointer() {
byte[] nestedCompressionPointer = {
6, 'g', 'i', 't', 'h', 'u', 'b', 2, 'i', 'o', 0, // github.io
5, 'n', 'e', 't', 't', 'y', (byte) 0xC0, 0, // netty.github.io
(byte) 0xC0, 11, // netty.github.io
};
ByteBuf buffer = Unpooled.wrappedBuffer(nestedCompressionPointer);
ByteBuf uncompressed = null;
try {
uncompressed = DnsCodecUtil.decompressDomainName(buffer.duplicate().setIndex(19, 21));
assertEquals(0, ByteBufUtil.compare(
Unpooled.wrappedBuffer(new byte[] {
5, 'n', 'e', 't', 't', 'y', 6, 'g', 'i', 't', 'h', 'u', 'b', 2, 'i', 'o', 0
}), uncompressed));
} finally {
buffer.release();
if (uncompressed != null) {
uncompressed.release();
}
}
}
@Test
public void testDecodeCompressionRDataPointer() throws Exception {
DefaultDnsRecordDecoder decoder = new DefaultDnsRecordDecoder();
byte[] compressionPointer = {
5, 'n', 'e', 't', 't', 'y', 2, 'i', 'o', 0,
(byte) 0xC0, 0
};
ByteBuf buffer = Unpooled.wrappedBuffer(compressionPointer);
DefaultDnsRawRecord cnameRecord = null;
DefaultDnsRawRecord nsRecord = null;
try {
cnameRecord = (DefaultDnsRawRecord) decoder.decodeRecord(
"netty.github.io", DnsRecordType.CNAME, DnsRecord.CLASS_IN, 60, buffer, 10, 2);
assertEquals("The rdata of CNAME-type record should be decompressed in advance",
0, ByteBufUtil.compare(buffer.duplicate().setIndex(0, 10), cnameRecord.content()));
assertEquals("netty.io.", DnsCodecUtil.decodeDomainName(cnameRecord.content()));
nsRecord = (DefaultDnsRawRecord) decoder.decodeRecord(
"netty.github.io", DnsRecordType.NS, DnsRecord.CLASS_IN, 60, buffer, 10, 2);
assertEquals("The rdata of NS-type record should be decompressed in advance",
0, ByteBufUtil.compare(buffer.duplicate().setIndex(0, 10), nsRecord.content()));
assertEquals("netty.io.", DnsCodecUtil.decodeDomainName(nsRecord.content()));
} finally {
buffer.release();
if (cnameRecord != null) {
cnameRecord.release();
}
if (nsRecord != null) {
nsRecord.release();
}
}
}
@Test
public void testDecodeMessageCompression() throws Exception {
// See https://www.ietf.org/rfc/rfc1035 [4.1.4. Message compression]

View File

@ -20,7 +20,7 @@
<parent>
<groupId>io.netty</groupId>
<artifactId>netty-parent</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</parent>
<artifactId>netty-codec-haproxy</artifactId>
@ -33,6 +33,16 @@
</properties>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-buffer</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-codec</artifactId>

View File

@ -17,9 +17,13 @@ package io.netty.handler.codec.haproxy;
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.haproxy.HAProxyProxiedProtocol.AddressFamily;
import io.netty.util.AbstractReferenceCounted;
import io.netty.util.ByteProcessor;
import io.netty.util.CharsetUtil;
import io.netty.util.NetUtil;
import io.netty.util.ResourceLeakDetector;
import io.netty.util.ResourceLeakDetectorFactory;
import io.netty.util.ResourceLeakTracker;
import java.util.ArrayList;
import java.util.Collections;
@ -28,29 +32,11 @@ import java.util.List;
/**
* Message container for decoded HAProxy proxy protocol parameters
*/
public final class HAProxyMessage {
/**
* Version 1 proxy protocol message for 'UNKNOWN' proxied protocols. Per spec, when the proxied protocol is
* 'UNKNOWN' we must discard all other header values.
*/
private static final HAProxyMessage V1_UNKNOWN_MSG = new HAProxyMessage(
HAProxyProtocolVersion.V1, HAProxyCommand.PROXY, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0);
/**
* Version 2 proxy protocol message for 'UNKNOWN' proxied protocols. Per spec, when the proxied protocol is
* 'UNKNOWN' we must discard all other header values.
*/
private static final HAProxyMessage V2_UNKNOWN_MSG = new HAProxyMessage(
HAProxyProtocolVersion.V2, HAProxyCommand.PROXY, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0);
/**
* Version 2 proxy protocol message for local requests. Per spec, we should use an unspecified protocol and family
* for 'LOCAL' commands. Per spec, when the proxied protocol is 'UNKNOWN' we must discard all other header values.
*/
private static final HAProxyMessage V2_LOCAL_MSG = new HAProxyMessage(
HAProxyProtocolVersion.V2, HAProxyCommand.LOCAL, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0);
public final class HAProxyMessage extends AbstractReferenceCounted {
private static final ResourceLeakDetector<HAProxyMessage> leakDetector =
ResourceLeakDetectorFactory.instance().newResourceLeakDetector(HAProxyMessage.class);
private final ResourceLeakTracker<HAProxyMessage> leak;
private final HAProxyProtocolVersion protocolVersion;
private final HAProxyCommand command;
private final HAProxyProxiedProtocol proxiedProtocol;
@ -108,6 +94,8 @@ public final class HAProxyMessage {
this.sourcePort = sourcePort;
this.destinationPort = destinationPort;
this.tlvs = Collections.unmodifiableList(tlvs);
leak = leakDetector.track(this);
}
/**
@ -150,7 +138,7 @@ public final class HAProxyMessage {
}
if (cmd == HAProxyCommand.LOCAL) {
return V2_LOCAL_MSG;
return unknownMsg(HAProxyProtocolVersion.V2, HAProxyCommand.LOCAL);
}
// Per spec, the 14th byte is the protocol and address family byte
@ -162,7 +150,7 @@ public final class HAProxyMessage {
}
if (protAndFam == HAProxyProxiedProtocol.UNKNOWN) {
return V2_UNKNOWN_MSG;
return unknownMsg(HAProxyProtocolVersion.V2, HAProxyCommand.PROXY);
}
int addressInfoLen = header.readUnsignedShort();
@ -337,7 +325,7 @@ public final class HAProxyMessage {
}
if (protAndFam == HAProxyProxiedProtocol.UNKNOWN) {
return V1_UNKNOWN_MSG;
return unknownMsg(HAProxyProtocolVersion.V1, HAProxyCommand.PROXY);
}
if (numParts != 6) {
@ -349,6 +337,14 @@ public final class HAProxyMessage {
protAndFam, parts[2], parts[3], parts[4], parts[5]);
}
/**
* Proxy protocol message for 'UNKNOWN' proxied protocols. Per spec, when the proxied protocol is
* 'UNKNOWN' we must discard all other header values.
*/
private static HAProxyMessage unknownMsg(HAProxyProtocolVersion version, HAProxyCommand command) {
return new HAProxyMessage(version, command, HAProxyProxiedProtocol.UNKNOWN, null, null, 0, 0);
}
/**
* Convert ip address bytes to string representation
*
@ -358,31 +354,20 @@ public final class HAProxyMessage {
*/
private static String ipBytesToString(ByteBuf header, int addressLen) {
StringBuilder sb = new StringBuilder();
if (addressLen == 4) {
sb.append(header.readByte() & 0xff);
sb.append('.');
sb.append(header.readByte() & 0xff);
sb.append('.');
sb.append(header.readByte() & 0xff);
sb.append('.');
sb.append(header.readByte() & 0xff);
final int ipv4Len = 4;
final int ipv6Len = 8;
if (addressLen == ipv4Len) {
for (int i = 0; i < ipv4Len; i++) {
sb.append(header.readByte() & 0xff);
sb.append('.');
}
} else {
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
sb.append(Integer.toHexString(header.readUnsignedShort()));
for (int i = 0; i < ipv6Len; i++) {
sb.append(Integer.toHexString(header.readUnsignedShort()));
sb.append(':');
}
}
sb.setLength(sb.length() - 1);
return sb.toString();
}
@ -519,4 +504,63 @@ public final class HAProxyMessage {
public List<HAProxyTLV> tlvs() {
return tlvs;
}
@Override
public HAProxyMessage touch() {
tryRecord();
return (HAProxyMessage) super.touch();
}
@Override
public HAProxyMessage touch(Object hint) {
if (leak != null) {
leak.record(hint);
}
return this;
}
@Override
public HAProxyMessage retain() {
tryRecord();
return (HAProxyMessage) super.retain();
}
@Override
public HAProxyMessage retain(int increment) {
tryRecord();
return (HAProxyMessage) super.retain(increment);
}
@Override
public boolean release() {
tryRecord();
return super.release();
}
@Override
public boolean release(int decrement) {
tryRecord();
return super.release(decrement);
}
private void tryRecord() {
if (leak != null) {
leak.record();
}
}
@Override
protected void deallocate() {
try {
for (HAProxyTLV tlv : tlvs) {
tlv.release();
}
} finally {
final ResourceLeakTracker<HAProxyMessage> leak = this.leak;
if (leak != null) {
boolean closed = leak.close(this);
assert closed;
}
}
}
}
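With HAProxyMessage now reference counted, consumers are expected to release it, exactly as the updated tests do. A hedged handler sketch (not from the patch, handler name illustrative): SimpleChannelInboundHandler releases the message automatically after channelRead0 returns, which also releases any TLVs and closes the leak tracker.

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.haproxy.HAProxyMessage;

public final class HAProxyInfoHandler extends SimpleChannelInboundHandler<HAProxyMessage> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HAProxyMessage msg) {
        // msg is released by SimpleChannelInboundHandler once this method returns.
        System.out.println(msg.sourceAddress() + ":" + msg.sourcePort());
    }
}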

View File

@ -18,7 +18,6 @@ package io.netty.handler.codec.haproxy;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.ProtocolDetectionResult;
import io.netty.util.CharsetUtil;
@ -50,11 +49,6 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
*/
private static final int V2_MAX_TLV = 65535 - 216;
/**
* Version 1 header delimiter is always '\r\n' per spec
*/
private static final int DELIMITER_LENGTH = 2;
/**
* Binary header prefix
*/
@ -98,6 +92,11 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
private static final ProtocolDetectionResult<HAProxyProtocolVersion> DETECTION_RESULT_V2 =
ProtocolDetectionResult.detected(HAProxyProtocolVersion.V2);
/**
* Used to extract a header frame out of the {@link ByteBuf} and return it.
*/
private HeaderExtractor headerExtractor;
/**
* {@code true} if we're discarding input because we're already over maxLength
*/
@ -108,6 +107,11 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
*/
private int discardedBytes;
/**
* Whether or not to throw an exception as soon as we exceed maxLength.
*/
private final boolean failFast;
/**
* {@code true} if we're finished decoding the proxy protocol header
*/
@ -125,14 +129,27 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
private final int v2MaxHeaderSize;
/**
* Creates a new decoder with no additional data (TLV) restrictions
* Creates a new decoder with no additional data (TLV) restrictions; an exception is thrown as soon as
* maxLength is exceeded.
*/
public HAProxyMessageDecoder() {
v2MaxHeaderSize = V2_MAX_LENGTH;
this(true);
}
/**
* Creates a new decoder with restricted additional data (TLV) size
* Creates a new decoder with no additional data (TLV) restrictions, letting the caller choose whether an
* exception is thrown as soon as maxLength is exceeded.
*
* @param failFast Whether or not to throw an exception as soon as we exceed maxLength
*/
public HAProxyMessageDecoder(boolean failFast) {
v2MaxHeaderSize = V2_MAX_LENGTH;
this.failFast = failFast;
}
/**
* Creates a new decoder with restricted additional data (TLV) size; an exception is thrown as soon as
* maxLength is exceeded.
* <p>
* <b>Note:</b> limiting TLV size only affects processing of v2, binary headers. Also, as allowed by the 1.5 spec
* TLV data is currently ignored. For maximum performance it would be best to configure your upstream proxy host to
@ -142,6 +159,17 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
* @param maxTlvSize maximum number of bytes allowed for additional data (Type-Length-Value vectors) in a v2 header
*/
public HAProxyMessageDecoder(int maxTlvSize) {
this(maxTlvSize, true);
}
/**
* Creates a new decoder with restricted additional data (TLV) size, letting the caller choose whether an
* exception is thrown as soon as maxLength is exceeded.
*
* @param maxTlvSize maximum number of bytes allowed for additional data (Type-Length-Value vectors) in a v2 header
* @param failFast Whether or not to throw an exception as soon as we exceed maxLength
*/
public HAProxyMessageDecoder(int maxTlvSize, boolean failFast) {
if (maxTlvSize < 1) {
v2MaxHeaderSize = V2_MIN_LENGTH;
} else if (maxTlvSize > V2_MAX_TLV) {
@ -154,6 +182,7 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
v2MaxHeaderSize = calcMax;
}
}
this.failFast = failFast;
}
/**
@ -259,7 +288,6 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
/**
* Create a frame out of the {@link ByteBuf} and return it.
* Based on code from {@link LineBasedFrameDecoder#decode(ChannelHandlerContext, ByteBuf)}.
*
* @param ctx the {@link ChannelHandlerContext} which this {@link HAProxyMessageDecoder} belongs to
* @param buffer the {@link ByteBuf} from which to read data
@ -267,42 +295,14 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
* be created
*/
private ByteBuf decodeStruct(ChannelHandlerContext ctx, ByteBuf buffer) throws Exception {
final int eoh = findEndOfHeader(buffer);
if (!discarding) {
if (eoh >= 0) {
final int length = eoh - buffer.readerIndex();
if (length > v2MaxHeaderSize) {
buffer.readerIndex(eoh);
failOverLimit(ctx, length);
return null;
}
return buffer.readSlice(length);
} else {
final int length = buffer.readableBytes();
if (length > v2MaxHeaderSize) {
discardedBytes = length;
buffer.skipBytes(length);
discarding = true;
failOverLimit(ctx, "over " + discardedBytes);
}
return null;
}
} else {
if (eoh >= 0) {
buffer.readerIndex(eoh);
discardedBytes = 0;
discarding = false;
} else {
discardedBytes = buffer.readableBytes();
buffer.skipBytes(discardedBytes);
}
return null;
if (headerExtractor == null) {
headerExtractor = new StructHeaderExtractor(v2MaxHeaderSize);
}
return headerExtractor.extract(ctx, buffer);
}
/**
* Create a frame out of the {@link ByteBuf} and return it.
* Based on code from {@link LineBasedFrameDecoder#decode(ChannelHandlerContext, ByteBuf)}.
*
* @param ctx the {@link ChannelHandlerContext} which this {@link HAProxyMessageDecoder} belongs to
* @param buffer the {@link ByteBuf} from which to read data
@ -310,40 +310,10 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
* be created
*/
private ByteBuf decodeLine(ChannelHandlerContext ctx, ByteBuf buffer) throws Exception {
final int eol = findEndOfLine(buffer);
if (!discarding) {
if (eol >= 0) {
final int length = eol - buffer.readerIndex();
if (length > V1_MAX_LENGTH) {
buffer.readerIndex(eol + DELIMITER_LENGTH);
failOverLimit(ctx, length);
return null;
}
ByteBuf frame = buffer.readSlice(length);
buffer.skipBytes(DELIMITER_LENGTH);
return frame;
} else {
final int length = buffer.readableBytes();
if (length > V1_MAX_LENGTH) {
discardedBytes = length;
buffer.skipBytes(length);
discarding = true;
failOverLimit(ctx, "over " + discardedBytes);
}
return null;
}
} else {
if (eol >= 0) {
final int delimLength = buffer.getByte(eol) == '\r' ? 2 : 1;
buffer.readerIndex(eol + delimLength);
discardedBytes = 0;
discarding = false;
} else {
discardedBytes = buffer.readableBytes();
buffer.skipBytes(discardedBytes);
}
return null;
if (headerExtractor == null) {
headerExtractor = new LineHeaderExtractor(V1_MAX_LENGTH);
}
return headerExtractor.extract(ctx, buffer);
}
private void failOverLimit(final ChannelHandlerContext ctx, int length) {
@ -399,4 +369,119 @@ public class HAProxyMessageDecoder extends ByteToMessageDecoder {
}
return true;
}
/**
* A HeaderExtractor creates a header frame out of the {@link ByteBuf}.
*/
private abstract class HeaderExtractor {
/** Header max size */
private final int maxHeaderSize;
protected HeaderExtractor(int maxHeaderSize) {
this.maxHeaderSize = maxHeaderSize;
}
/**
* Create a frame out of the {@link ByteBuf} and return it.
*
* @param ctx the {@link ChannelHandlerContext} which this {@link HAProxyMessageDecoder} belongs to
* @param buffer the {@link ByteBuf} from which to read data
* @return frame the {@link ByteBuf} which represent the frame or {@code null} if no frame could
* be created
* @throws Exception if the header exceeds maxLength
*/
public ByteBuf extract(ChannelHandlerContext ctx, ByteBuf buffer) throws Exception {
final int eoh = findEndOfHeader(buffer);
if (!discarding) {
if (eoh >= 0) {
final int length = eoh - buffer.readerIndex();
if (length > maxHeaderSize) {
buffer.readerIndex(eoh + delimiterLength(buffer, eoh));
failOverLimit(ctx, length);
return null;
}
ByteBuf frame = buffer.readSlice(length);
buffer.skipBytes(delimiterLength(buffer, eoh));
return frame;
} else {
final int length = buffer.readableBytes();
if (length > maxHeaderSize) {
discardedBytes = length;
buffer.skipBytes(length);
discarding = true;
if (failFast) {
failOverLimit(ctx, "over " + discardedBytes);
}
}
return null;
}
} else {
if (eoh >= 0) {
final int length = discardedBytes + eoh - buffer.readerIndex();
buffer.readerIndex(eoh + delimiterLength(buffer, eoh));
discardedBytes = 0;
discarding = false;
if (!failFast) {
failOverLimit(ctx, "over " + length);
}
} else {
discardedBytes += buffer.readableBytes();
buffer.skipBytes(buffer.readableBytes());
}
return null;
}
}
/**
* Find the end of the header in the given {@link ByteBuf}; the end may be marked by a CRLF or determined by the
* length given in the header.
*
* @param buffer the buffer to be searched
* @return {@code -1} if the end cannot be found, otherwise the buffer index of the end
*/
protected abstract int findEndOfHeader(ByteBuf buffer);
/**
* Get the length of the header delimiter.
*
* @param buffer the buffer where delimiter is located
* @param eoh index of delimiter
* @return length of the delimiter
*/
protected abstract int delimiterLength(ByteBuf buffer, int eoh);
}
private final class LineHeaderExtractor extends HeaderExtractor {
LineHeaderExtractor(int maxHeaderSize) {
super(maxHeaderSize);
}
@Override
protected int findEndOfHeader(ByteBuf buffer) {
return findEndOfLine(buffer);
}
@Override
protected int delimiterLength(ByteBuf buffer, int eoh) {
return buffer.getByte(eoh) == '\r' ? 2 : 1;
}
}
private final class StructHeaderExtractor extends HeaderExtractor {
StructHeaderExtractor(int maxHeaderSize) {
super(maxHeaderSize);
}
@Override
protected int findEndOfHeader(ByteBuf buffer) {
return HAProxyMessageDecoder.findEndOfHeader(buffer);
}
@Override
protected int delimiterLength(ByteBuf buffer, int eoh) {
return 0;
}
}
}
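// The extractor hierarchy above is a template method: the shared extract() loop only differs in how the end of
// the header and the delimiter length are found. A stripped-down sketch of that shape (illustrative names only,
// without the discarding/failFast state of the real decoder):
import io.netty.buffer.ByteBuf;

abstract class SimpleHeaderExtractor {
    private final int maxHeaderSize;

    protected SimpleHeaderExtractor(int maxHeaderSize) {
        this.maxHeaderSize = maxHeaderSize;
    }

    /** Returns the header frame, or {@code null} if more bytes are needed. */
    ByteBuf extract(ByteBuf buffer) {
        int eoh = findEndOfHeader(buffer);
        if (eoh < 0) {
            return null; // header not complete yet, wait for more data
        }
        int length = eoh - buffer.readerIndex();
        if (length > maxHeaderSize) {
            throw new IllegalStateException("header too long: " + length);
        }
        ByteBuf frame = buffer.readSlice(length);
        buffer.skipBytes(delimiterLength(buffer, eoh));
        return frame;
    }

    protected abstract int findEndOfHeader(ByteBuf buffer);

    protected abstract int delimiterLength(ByteBuf buffer, int eoh);
}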


@ -24,7 +24,9 @@ import io.netty.handler.codec.haproxy.HAProxyProxiedProtocol.AddressFamily;
import io.netty.handler.codec.haproxy.HAProxyProxiedProtocol.TransportProtocol;
import io.netty.util.CharsetUtil;
import org.junit.Before;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ExpectedException;
import java.util.List;
@ -32,6 +34,8 @@ import static io.netty.buffer.Unpooled.*;
import static org.junit.Assert.*;
public class HAProxyMessageDecoderTest {
@Rule
public ExpectedException exceptionRule = ExpectedException.none();
private EmbeddedChannel ch;
@ -58,6 +62,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(443, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -78,6 +83,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(443, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -98,6 +104,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(0, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test(expected = HAProxyProtocolException.class)
@ -161,6 +168,43 @@ public class HAProxyMessageDecoderTest {
ch.writeInbound(copiedBuffer(header, CharsetUtil.US_ASCII));
}
@Test
public void testFailSlowHeaderTooLong() {
EmbeddedChannel slowFailCh = new EmbeddedChannel(new HAProxyMessageDecoder(false));
try {
String headerPart1 = "PROXY TCP4 192.168.0.1 192.168.0.11 56324 " +
"000000000000000000000000000000000000000000000000000000000000000000000443";
// Should not throw exception
assertFalse(slowFailCh.writeInbound(copiedBuffer(headerPart1, CharsetUtil.US_ASCII)));
String headerPart2 = "more header data";
// Should not throw exception
assertFalse(slowFailCh.writeInbound(copiedBuffer(headerPart2, CharsetUtil.US_ASCII)));
String headerPart3 = "end of header\r\n";
int discarded = headerPart1.length() + headerPart2.length() + headerPart3.length() - 2;
// Should throw exception
exceptionRule.expect(HAProxyProtocolException.class);
exceptionRule.expectMessage("over " + discarded);
assertFalse(slowFailCh.writeInbound(copiedBuffer(headerPart3, CharsetUtil.US_ASCII)));
} finally {
assertFalse(slowFailCh.finishAndReleaseAll());
}
}
@Test
public void testFailFastHeaderTooLong() {
EmbeddedChannel fastFailCh = new EmbeddedChannel(new HAProxyMessageDecoder(true));
try {
String headerPart1 = "PROXY TCP4 192.168.0.1 192.168.0.11 56324 " +
"000000000000000000000000000000000000000000000000000000000000000000000443";
exceptionRule.expect(HAProxyProtocolException.class); // Should throw exception, fail fast
exceptionRule.expectMessage("over " + headerPart1.length());
assertFalse(fastFailCh.writeInbound(copiedBuffer(headerPart1, CharsetUtil.US_ASCII)));
} finally {
assertFalse(fastFailCh.finishAndReleaseAll());
}
}
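// For context, a minimal (assumed) pipeline sketch using the failFast flag exercised by the two tests above:
// with failFast=true the HAProxyProtocolException is raised as soon as the limit is exceeded, not only after
// the over-long header finally ends. The class name is illustrative.
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.haproxy.HAProxyMessageDecoder;

public class ProxyProtocolInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new HAProxyMessageDecoder(true)); // true = fail fast on oversized headers
        // ... application handlers would follow here
    }
}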
@Test
public void testIncompleteHeader() {
String header = "PROXY TCP4 192.168.0.1 192.168.0.11 56324";
@ -264,6 +308,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(443, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -319,6 +364,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(443, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -398,6 +444,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(443, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -476,6 +523,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(0, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -531,6 +579,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(0, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -586,6 +635,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(0, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test
@ -642,9 +692,7 @@ public class HAProxyMessageDecoderTest {
assertTrue(0 < firstTlv.refCnt());
assertTrue(0 < secondTlv.refCnt());
assertTrue(0 < thirdTLV.refCnt());
assertFalse(thirdTLV.release());
assertFalse(secondTlv.release());
assertTrue(firstTlv.release());
assertTrue(msg.release());
assertEquals(0, firstTlv.refCnt());
assertEquals(0, secondTlv.refCnt());
assertEquals(0, thirdTLV.refCnt());
@ -653,6 +701,51 @@ public class HAProxyMessageDecoderTest {
assertFalse(ch.finish());
}
@Test
public void testReleaseHAProxyMessage() {
ch = new EmbeddedChannel(new HAProxyMessageDecoder());
final byte[] bytes = {
13, 10, 13, 10, 0, 13, 10, 81, 85, 73, 84, 10, 33, 17, 0, 35, 127, 0, 0, 1, 127, 0, 0, 1,
-55, -90, 7, 89, 32, 0, 20, 5, 0, 0, 0, 0, 33, 0, 5, 84, 76, 83, 118, 49, 34, 0, 4, 76, 69, 65, 70
};
int startChannels = ch.pipeline().names().size();
assertTrue(ch.writeInbound(copiedBuffer(bytes)));
Object msgObj = ch.readInbound();
assertEquals(startChannels - 1, ch.pipeline().names().size());
HAProxyMessage msg = (HAProxyMessage) msgObj;
final List<HAProxyTLV> tlvs = msg.tlvs();
assertEquals(3, tlvs.size());
assertEquals(1, msg.refCnt());
for (HAProxyTLV tlv : tlvs) {
assertEquals(3, tlv.refCnt());
}
// Retain the haproxy message
msg.retain();
assertEquals(2, msg.refCnt());
for (HAProxyTLV tlv : tlvs) {
assertEquals(3, tlv.refCnt());
}
// Decrease the haproxy message refCnt
msg.release();
assertEquals(1, msg.refCnt());
for (HAProxyTLV tlv : tlvs) {
assertEquals(3, tlv.refCnt());
}
// Release haproxy message, TLVs will be released with it
msg.release();
assertEquals(0, msg.refCnt());
for (HAProxyTLV tlv : tlvs) {
assertEquals(0, tlv.refCnt());
}
}
@Test
public void testV2WithTLV() {
ch = new EmbeddedChannel(new HAProxyMessageDecoder(4));
@ -738,6 +831,7 @@ public class HAProxyMessageDecoderTest {
assertEquals(0, msg.destinationPort());
assertNull(ch.readInbound());
assertFalse(ch.finish());
assertTrue(msg.release());
}
@Test(expected = HAProxyProtocolException.class)


@ -20,7 +20,7 @@
<parent>
<groupId>io.netty</groupId>
<artifactId>netty-parent</artifactId>
<version>4.1.32.Final-SNAPSHOT</version>
<version>4.1.39.Final-SNAPSHOT</version>
</parent>
<artifactId>netty-codec-http</artifactId>
@ -33,6 +33,21 @@
</properties>
<dependencies>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-common</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-buffer</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-transport</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>${project.groupId}</groupId>
<artifactId>netty-codec</artifactId>
@ -42,7 +57,6 @@
<groupId>${project.groupId}</groupId>
<artifactId>netty-handler</artifactId>
<version>${project.version}</version>
<optional>true</optional>
</dependency>
<dependency>
<groupId>com.jcraft</groupId>


@ -26,6 +26,7 @@ import java.util.Iterator;
import java.util.List;
import java.util.Map;
import static io.netty.handler.codec.http.HttpHeaderNames.SET_COOKIE;
import static io.netty.util.AsciiString.CASE_INSENSITIVE_HASHER;
import static io.netty.util.internal.StringUtil.COMMA;
import static io.netty.util.internal.StringUtil.unescapeCsvFields;
@ -78,7 +79,7 @@ public class CombinedHttpHeaders extends DefaultHttpHeaders {
return charSequenceEscaper;
}
public CombinedHttpHeadersImpl(HashingStrategy<CharSequence> nameHashingStrategy,
CombinedHttpHeadersImpl(HashingStrategy<CharSequence> nameHashingStrategy,
ValueConverter<CharSequence> valueConverter,
io.netty.handler.codec.DefaultHeaders.NameValidator<CharSequence> nameValidator) {
super(nameHashingStrategy, valueConverter, nameValidator);
@ -87,7 +88,7 @@ public class CombinedHttpHeaders extends DefaultHttpHeaders {
@Override
public Iterator<CharSequence> valueIterator(CharSequence name) {
Iterator<CharSequence> itr = super.valueIterator(name);
if (!itr.hasNext()) {
if (!itr.hasNext() || cannotBeCombined(name)) {
return itr;
}
Iterator<CharSequence> unescapedItr = unescapeCsvFields(itr.next()).iterator();
@ -100,7 +101,7 @@ public class CombinedHttpHeaders extends DefaultHttpHeaders {
@Override
public List<CharSequence> getAll(CharSequence name) {
List<CharSequence> values = super.getAll(name);
if (values.isEmpty()) {
if (values.isEmpty() || cannotBeCombined(name)) {
return values;
}
if (values.size() != 1) {
@ -213,9 +214,13 @@ public class CombinedHttpHeaders extends DefaultHttpHeaders {
return this;
}
private static boolean cannotBeCombined(CharSequence name) {
return SET_COOKIE.contentEqualsIgnoreCase(name);
}
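// Usage sketch (cookie values made up, class name illustrative) of the behaviour the Set-Cookie special case
// above preserves: the two cookies remain separate values instead of being folded into one comma-joined header.
import io.netty.handler.codec.http.CombinedHttpHeaders;
import io.netty.handler.codec.http.HttpHeaderNames;

public final class SetCookieExample {
    public static void main(String[] args) {
        CombinedHttpHeaders headers = new CombinedHttpHeaders(true);
        headers.add(HttpHeaderNames.SET_COOKIE, "id=a3fWa; Path=/");
        headers.add(HttpHeaderNames.SET_COOKIE, "lang=en; Max-Age=3600");
        // Prints two distinct values, because Set-Cookie cannot be combined (RFC 6265).
        System.out.println(headers.getAll(HttpHeaderNames.SET_COOKIE));
    }
}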
private CombinedHttpHeadersImpl addEscapedValue(CharSequence name, CharSequence escapedValue) {
CharSequence currentValue = super.get(name);
if (currentValue == null) {
if (currentValue == null || cannotBeCombined(name)) {
super.add(name, escapedValue);
} else {
super.set(name, commaSeparateEscapedValues(currentValue, escapedValue));


@ -28,6 +28,11 @@ final class ComposedLastHttpContent implements LastHttpContent {
this.trailingHeaders = trailingHeaders;
}
ComposedLastHttpContent(HttpHeaders trailingHeaders, DecoderResult result) {
this(trailingHeaders);
this.result = result;
}
@Override
public HttpHeaders trailingHeaders() {
return trailingHeaders;


@ -372,8 +372,7 @@ public class DefaultHttpHeaders extends HttpHeaders {
default:
// Check to see if the character is not an ASCII character, or invalid
if (value < 0) {
throw new IllegalArgumentException("a header name cannot contain non-ASCII character: " +
value);
throw new IllegalArgumentException("a header name cannot contain non-ASCII character: " + value);
}
}
}


@ -19,6 +19,7 @@ import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.CodecException;
import io.netty.handler.codec.DecoderResult;
import io.netty.handler.codec.MessageToMessageDecoder;
import io.netty.util.ReferenceCountUtil;
@ -50,102 +51,107 @@ public abstract class HttpContentDecoder extends MessageToMessageDecoder<HttpObj
protected ChannelHandlerContext ctx;
private EmbeddedChannel decoder;
private boolean continueResponse;
private boolean needRead = true;
@Override
protected void decode(ChannelHandlerContext ctx, HttpObject msg, List<Object> out) throws Exception {
if (msg instanceof HttpResponse && ((HttpResponse) msg).status().code() == 100) {
try {
if (msg instanceof HttpResponse && ((HttpResponse) msg).status().code() == 100) {
if (!(msg instanceof LastHttpContent)) {
continueResponse = true;
}
// 100-continue response must be passed through.
out.add(ReferenceCountUtil.retain(msg));
return;
}
if (continueResponse) {
if (msg instanceof LastHttpContent) {
continueResponse = false;
}
// 100-continue response must be passed through.
out.add(ReferenceCountUtil.retain(msg));
return;
}
if (msg instanceof HttpMessage) {
cleanup();
final HttpMessage message = (HttpMessage) msg;
final HttpHeaders headers = message.headers();
// Determine the content encoding.
String contentEncoding = headers.get(HttpHeaderNames.CONTENT_ENCODING);
if (contentEncoding != null) {
contentEncoding = contentEncoding.trim();
} else {
contentEncoding = IDENTITY;
}
decoder = newContentDecoder(contentEncoding);
if (decoder == null) {
if (message instanceof HttpContent) {
((HttpContent) message).retain();
if (!(msg instanceof LastHttpContent)) {
continueResponse = true;
}
out.add(message);
// 100-continue response must be passed through.
out.add(ReferenceCountUtil.retain(msg));
return;
}
// Remove content-length header:
// the correct value can be set only after all chunks are processed/decoded.
// If buffering is not an issue, add HttpObjectAggregator down the chain, it will set the header.
// Otherwise, rely on LastHttpContent message.
if (headers.contains(HttpHeaderNames.CONTENT_LENGTH)) {
headers.remove(HttpHeaderNames.CONTENT_LENGTH);
headers.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
}
// Either it is already chunked or EOF terminated.
// See https://github.com/netty/netty/issues/5892
// set new content encoding,
CharSequence targetContentEncoding = getTargetContentEncoding(contentEncoding);
if (HttpHeaderValues.IDENTITY.contentEquals(targetContentEncoding)) {
// Do NOT set the 'Content-Encoding' header if the target encoding is 'identity'
// as per: http://tools.ietf.org/html/rfc2616#section-14.11
headers.remove(HttpHeaderNames.CONTENT_ENCODING);
} else {
headers.set(HttpHeaderNames.CONTENT_ENCODING, targetContentEncoding);
}
if (message instanceof HttpContent) {
// If message is a full request or response object (headers + data), don't copy data part into out.
// Output headers only; data part will be decoded below.
// Note: "copy" object must not be an instance of LastHttpContent class,
// as this would (erroneously) indicate the end of the HttpMessage to other handlers.
HttpMessage copy;
if (message instanceof HttpRequest) {
HttpRequest r = (HttpRequest) message; // HttpRequest or FullHttpRequest
copy = new DefaultHttpRequest(r.protocolVersion(), r.method(), r.uri());
} else if (message instanceof HttpResponse) {
HttpResponse r = (HttpResponse) message; // HttpResponse or FullHttpResponse
copy = new DefaultHttpResponse(r.protocolVersion(), r.status());
} else {
throw new CodecException("Object of class " + message.getClass().getName() +
" is not a HttpRequest or HttpResponse");
if (continueResponse) {
if (msg instanceof LastHttpContent) {
continueResponse = false;
}
copy.headers().set(message.headers());
copy.setDecoderResult(message.decoderResult());
out.add(copy);
} else {
out.add(message);
// 100-continue response must be passed through.
out.add(ReferenceCountUtil.retain(msg));
return;
}
}
if (msg instanceof HttpContent) {
final HttpContent c = (HttpContent) msg;
if (decoder == null) {
out.add(c.retain());
} else {
decodeContent(c, out);
if (msg instanceof HttpMessage) {
cleanup();
final HttpMessage message = (HttpMessage) msg;
final HttpHeaders headers = message.headers();
// Determine the content encoding.
String contentEncoding = headers.get(HttpHeaderNames.CONTENT_ENCODING);
if (contentEncoding != null) {
contentEncoding = contentEncoding.trim();
} else {
contentEncoding = IDENTITY;
}
decoder = newContentDecoder(contentEncoding);
if (decoder == null) {
if (message instanceof HttpContent) {
((HttpContent) message).retain();
}
out.add(message);
return;
}
// Remove content-length header:
// the correct value can be set only after all chunks are processed/decoded.
// If buffering is not an issue, add HttpObjectAggregator down the chain, it will set the header.
// Otherwise, rely on LastHttpContent message.
if (headers.contains(HttpHeaderNames.CONTENT_LENGTH)) {
headers.remove(HttpHeaderNames.CONTENT_LENGTH);
headers.set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
}
// Either it is already chunked or EOF terminated.
// See https://github.com/netty/netty/issues/5892
// set new content encoding,
CharSequence targetContentEncoding = getTargetContentEncoding(contentEncoding);
if (HttpHeaderValues.IDENTITY.contentEquals(targetContentEncoding)) {
// Do NOT set the 'Content-Encoding' header if the target encoding is 'identity'
// as per: http://tools.ietf.org/html/rfc2616#section-14.11
headers.remove(HttpHeaderNames.CONTENT_ENCODING);
} else {
headers.set(HttpHeaderNames.CONTENT_ENCODING, targetContentEncoding);
}
if (message instanceof HttpContent) {
// If message is a full request or response object (headers + data), don't copy data part into out.
// Output headers only; data part will be decoded below.
// Note: "copy" object must not be an instance of LastHttpContent class,
// as this would (erroneously) indicate the end of the HttpMessage to other handlers.
HttpMessage copy;
if (message instanceof HttpRequest) {
HttpRequest r = (HttpRequest) message; // HttpRequest or FullHttpRequest
copy = new DefaultHttpRequest(r.protocolVersion(), r.method(), r.uri());
} else if (message instanceof HttpResponse) {
HttpResponse r = (HttpResponse) message; // HttpResponse or FullHttpResponse
copy = new DefaultHttpResponse(r.protocolVersion(), r.status());
} else {
throw new CodecException("Object of class " + message.getClass().getName() +
" is not a HttpRequest or HttpResponse");
}
copy.headers().set(message.headers());
copy.setDecoderResult(message.decoderResult());
out.add(copy);
} else {
out.add(message);
}
}
if (msg instanceof HttpContent) {
final HttpContent c = (HttpContent) msg;
if (decoder == null) {
out.add(c.retain());
} else {
decodeContent(c, out);
}
}
} finally {
needRead = out.isEmpty();
}
}
@ -164,7 +170,21 @@ public abstract class HttpContentDecoder extends MessageToMessageDecoder<HttpObj
if (headers.isEmpty()) {
out.add(LastHttpContent.EMPTY_LAST_CONTENT);
} else {
out.add(new ComposedLastHttpContent(headers));
out.add(new ComposedLastHttpContent(headers, DecoderResult.SUCCESS));
}
}
}
@Override
public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
boolean needRead = this.needRead;
this.needRead = true;
try {
ctx.fireChannelReadComplete();
} finally {
if (needRead && !ctx.channel().config().isAutoRead()) {
ctx.read();
}
}
}
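// The channelReadComplete() override above matters when autoRead is disabled: if a decode pass produced no
// messages for downstream handlers, the decoder now requests the next read itself so the pipeline does not
// stall. A hypothetical setup where this applies (class name and handler choice are illustrative):
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpContentDecompressor;
import io.netty.handler.codec.http.HttpServerCodec;

public class ManualReadInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.config().setAutoRead(false); // the application drives reads explicitly
        ch.pipeline().addLast(new HttpServerCodec());
        ch.pipeline().addLast(new HttpContentDecompressor()); // an HttpContentDecoder subtype
        // ... an application handler that calls ctx.read() when it wants more data
    }
}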


@ -19,6 +19,7 @@ import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.DecoderResult;
import io.netty.handler.codec.MessageToMessageCodec;
import io.netty.util.ReferenceCountUtil;
@ -77,10 +78,10 @@ public abstract class HttpContentEncoder extends MessageToMessageCodec<HttpReque
acceptedEncoding = HttpContentDecoder.IDENTITY;
}
HttpMethod meth = msg.method();
if (meth == HttpMethod.HEAD) {
HttpMethod method = msg.method();
if (HttpMethod.HEAD.equals(method)) {
acceptedEncoding = ZERO_LENGTH_HEAD;
} else if (meth == HttpMethod.CONNECT) {
} else if (HttpMethod.CONNECT.equals(method)) {
acceptedEncoding = ZERO_LENGTH_CONNECT;
}
@ -264,7 +265,7 @@ public abstract class HttpContentEncoder extends MessageToMessageCodec<HttpReque
if (headers.isEmpty()) {
out.add(LastHttpContent.EMPTY_LAST_CONTENT);
} else {
out.add(new ComposedLastHttpContent(headers));
out.add(new ComposedLastHttpContent(headers, DecoderResult.SUCCESS));
}
return true;
}


@ -1695,7 +1695,7 @@ public abstract class HttpHeaders implements Iterable<Map.Entry<String, String>>
}
/**
* Returns a deap copy of the passed in {@link HttpHeaders}.
* Returns a deep copy of the passed in {@link HttpHeaders}.
*/
public HttpHeaders copy() {
return new DefaultHttpHeaders().set(this);


@ -156,6 +156,9 @@ public class HttpMethod implements Comparable<HttpMethod> {
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (!(o instanceof HttpMethod)) {
return false;
}
@ -171,6 +174,9 @@ public class HttpMethod implements Comparable<HttpMethod> {
@Override
public int compareTo(HttpMethod o) {
if (o == this) {
return 0;
}
return name().compareTo(o.name());
}
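// The new fast path in equals(), and the switches from == to equals() elsewhere in this diff, rely on value
// equality: two HttpMethod instances with the same name are equal without being the same object. A tiny
// illustration (class name is made up):
import io.netty.handler.codec.http.HttpMethod;

public final class HttpMethodEqualityExample {
    public static void main(String[] args) {
        HttpMethod cached = HttpMethod.GET;
        HttpMethod custom = new HttpMethod("GET"); // e.g. constructed by user code
        System.out.println(cached == custom);      // false
        System.out.println(cached.equals(custom)); // true
    }
}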


@ -271,13 +271,6 @@ public class HttpObjectAggregator
}
});
}
// If an oversized request was handled properly and the connection is still alive
// (i.e. rejected 100-continue). the decoder should prepare to handle a new message.
HttpObjectDecoder decoder = ctx.pipeline().get(HttpObjectDecoder.class);
if (decoder != null) {
decoder.reset();
}
} else if (oversized instanceof HttpResponse) {
ctx.close();
throw new TooLongFrameException("Response entity too large: " + oversized);


@ -15,6 +15,8 @@
*/
package io.netty.handler.codec.http;
import static io.netty.util.internal.ObjectUtil.checkPositive;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
@ -168,21 +170,10 @@ public abstract class HttpObjectDecoder extends ByteToMessageDecoder {
protected HttpObjectDecoder(
int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
boolean chunkedSupported, boolean validateHeaders, int initialBufferSize) {
if (maxInitialLineLength <= 0) {
throw new IllegalArgumentException(
"maxInitialLineLength must be a positive integer: " +
maxInitialLineLength);
}
if (maxHeaderSize <= 0) {
throw new IllegalArgumentException(
"maxHeaderSize must be a positive integer: " +
maxHeaderSize);
}
if (maxChunkSize <= 0) {
throw new IllegalArgumentException(
"maxChunkSize must be a positive integer: " +
maxChunkSize);
}
checkPositive(maxInitialLineLength, "maxInitialLineLength");
checkPositive(maxHeaderSize, "maxHeaderSize");
checkPositive(maxChunkSize, "maxChunkSize");
AppendableCharSequence seq = new AppendableCharSequence(initialBufferSize);
lineParser = new LineParser(seq, maxInitialLineLength);
headerParser = new HeaderParser(seq, maxHeaderSize);
@ -640,49 +631,50 @@ public abstract class HttpObjectDecoder extends ByteToMessageDecoder {
if (line == null) {
return null;
}
CharSequence lastHeader = null;
if (line.length() > 0) {
LastHttpContent trailer = this.trailer;
if (trailer == null) {
trailer = this.trailer = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER, validateHeaders);
}
do {
char firstChar = line.charAt(0);
if (lastHeader != null && (firstChar == ' ' || firstChar == '\t')) {
List<String> current = trailer.trailingHeaders().getAll(lastHeader);
if (!current.isEmpty()) {
int lastPos = current.size() - 1;
// please do not merge the two lines below into one,
// as that breaks the -XX:+OptimizeStringConcat optimization
String lineTrimmed = line.toString().trim();
String currentLastPos = current.get(lastPos);
current.set(lastPos, currentLastPos + lineTrimmed);
}
} else {
splitHeader(line);
CharSequence headerName = name;
if (!HttpHeaderNames.CONTENT_LENGTH.contentEqualsIgnoreCase(headerName) &&
!HttpHeaderNames.TRANSFER_ENCODING.contentEqualsIgnoreCase(headerName) &&
!HttpHeaderNames.TRAILER.contentEqualsIgnoreCase(headerName)) {
trailer.trailingHeaders().add(headerName, value);
}
lastHeader = name;
// reset name and value fields
name = null;
value = null;
}
line = headerParser.parse(buffer);
if (line == null) {
return null;
}
} while (line.length() > 0);
this.trailer = null;
return trailer;
LastHttpContent trailer = this.trailer;
if (line.length() == 0 && trailer == null) {
// We have received the empty line that terminates the trailer section and did not parse any trailers
// before. Just return an empty last content to reduce allocations.
return LastHttpContent.EMPTY_LAST_CONTENT;
}
return LastHttpContent.EMPTY_LAST_CONTENT;
CharSequence lastHeader = null;
if (trailer == null) {
trailer = this.trailer = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER, validateHeaders);
}
while (line.length() > 0) {
char firstChar = line.charAt(0);
if (lastHeader != null && (firstChar == ' ' || firstChar == '\t')) {
List<String> current = trailer.trailingHeaders().getAll(lastHeader);
if (!current.isEmpty()) {
int lastPos = current.size() - 1;
// please do not merge the two lines below into one,
// as that breaks the -XX:+OptimizeStringConcat optimization
String lineTrimmed = line.toString().trim();
String currentLastPos = current.get(lastPos);
current.set(lastPos, currentLastPos + lineTrimmed);
}
} else {
splitHeader(line);
CharSequence headerName = name;
if (!HttpHeaderNames.CONTENT_LENGTH.contentEqualsIgnoreCase(headerName) &&
!HttpHeaderNames.TRANSFER_ENCODING.contentEqualsIgnoreCase(headerName) &&
!HttpHeaderNames.TRAILER.contentEqualsIgnoreCase(headerName)) {
trailer.trailingHeaders().add(headerName, value);
}
lastHeader = name;
// reset name and value fields
name = null;
value = null;
}
line = headerParser.parse(buffer);
if (line == null) {
return null;
}
}
this.trailer = null;
return trailer;
}
protected abstract boolean isDecodingRequest();
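// As the comment in the new branch notes, a chunked message without trailers now decodes to the shared
// LastHttpContent.EMPTY_LAST_CONTENT instance. A sketch of observing that (request bytes and class name are
// made up; the printed value is expected to be true with the change above):
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.HttpRequestDecoder;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.util.CharsetUtil;
import io.netty.util.ReferenceCountUtil;

public final class EmptyTrailerExample {
    public static void main(String[] args) {
        EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestDecoder());
        String request = "POST / HTTP/1.1\r\n" +
                "Transfer-Encoding: chunked\r\n\r\n" +
                "3\r\nabc\r\n" +
                "0\r\n\r\n"; // last chunk followed by an empty trailer section
        ch.writeInbound(Unpooled.copiedBuffer(request, CharsetUtil.US_ASCII));
        for (Object msg = ch.readInbound(); msg != null; msg = ch.readInbound()) {
            if (msg instanceof LastHttpContent) {
                System.out.println(msg == LastHttpContent.EMPTY_LAST_CONTENT);
            }
            ReferenceCountUtil.release(msg);
        }
        ch.finish();
    }
}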


@ -22,6 +22,7 @@ import io.netty.util.CharsetUtil;
import static io.netty.handler.codec.http.HttpConstants.SP;
import static io.netty.util.ByteProcessor.FIND_ASCII_SPACE;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import static java.lang.Integer.parseInt;
/**
@ -538,10 +539,7 @@ public class HttpResponseStatus implements Comparable<HttpResponseStatus> {
}
private HttpResponseStatus(int code, String reasonPhrase, boolean bytes) {
if (code < 0) {
throw new IllegalArgumentException(
"code: " + code + " (expected: 0+)");
}
checkPositiveOrZero(code, "code");
if (reasonPhrase == null) {
throw new NullPointerException("reasonPhrase");


@ -81,16 +81,18 @@ public final class HttpServerCodec extends CombinedChannelDuplexHandler<HttpRequ
}
private final class HttpServerRequestDecoder extends HttpRequestDecoder {
public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) {
HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize) {
super(maxInitialLineLength, maxHeaderSize, maxChunkSize);
}
public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
boolean validateHeaders) {
super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders);
}
public HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
HttpServerRequestDecoder(int maxInitialLineLength, int maxHeaderSize, int maxChunkSize,
boolean validateHeaders, int initialBufferSize) {
super(maxInitialLineLength, maxHeaderSize, maxChunkSize, validateHeaders, initialBufferSize);
}
@ -115,7 +117,8 @@ public final class HttpServerCodec extends CombinedChannelDuplexHandler<HttpRequ
@Override
protected void sanitizeHeadersBeforeEncode(HttpResponse msg, boolean isAlwaysEmpty) {
if (!isAlwaysEmpty && method == HttpMethod.CONNECT && msg.status().codeClass() == HttpStatusClass.SUCCESS) {
if (!isAlwaysEmpty && HttpMethod.CONNECT.equals(method)
&& msg.status().codeClass() == HttpStatusClass.SUCCESS) {
// Stripping Transfer-Encoding:
// See https://tools.ietf.org/html/rfc7230#section-3.3.1
msg.headers().remove(HttpHeaderNames.TRANSFER_ENCODING);


@ -14,9 +14,6 @@
*/
package io.netty.handler.codec.http;
import static io.netty.util.AsciiString.containsContentEqualsIgnoreCase;
import static io.netty.util.AsciiString.containsAllContentEqualsIgnoreCase;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
@ -30,7 +27,10 @@ import java.util.List;
import static io.netty.handler.codec.http.HttpResponseStatus.SWITCHING_PROTOCOLS;
import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;
import static io.netty.util.AsciiString.containsAllContentEqualsIgnoreCase;
import static io.netty.util.AsciiString.containsContentEqualsIgnoreCase;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.StringUtil.COMMA;
/**
* A server-side handler that receives HTTP requests and optionally performs a protocol switch if
@ -284,16 +284,23 @@ public class HttpServerUpgradeHandler extends HttpObjectAggregator {
}
// Make sure the CONNECTION header is present.
CharSequence connectionHeader = request.headers().get(HttpHeaderNames.CONNECTION);
if (connectionHeader == null) {
List<String> connectionHeaderValues = request.headers().getAll(HttpHeaderNames.CONNECTION);
if (connectionHeaderValues == null) {
return false;
}
final StringBuilder concatenatedConnectionValue = new StringBuilder(connectionHeaderValues.size() * 10);
for (CharSequence connectionHeaderValue : connectionHeaderValues) {
concatenatedConnectionValue.append(connectionHeaderValue).append(COMMA);
}
concatenatedConnectionValue.setLength(concatenatedConnectionValue.length() - 1);
// Make sure the CONNECTION header contains UPGRADE as well as all protocol-specific headers.
Collection<CharSequence> requiredHeaders = upgradeCodec.requiredUpgradeHeaders();
List<CharSequence> values = splitHeader(connectionHeader);
List<CharSequence> values = splitHeader(concatenatedConnectionValue);
if (!containsContentEqualsIgnoreCase(values, HttpHeaderNames.UPGRADE) ||
!containsAllContentEqualsIgnoreCase(values, requiredHeaders)) {
!containsAllContentEqualsIgnoreCase(values, requiredHeaders)) {
return false;
}


@ -65,16 +65,9 @@ public final class HttpUtil {
* {@link HttpVersion#isKeepAliveDefault()}.
*/
public static boolean isKeepAlive(HttpMessage message) {
CharSequence connection = message.headers().get(HttpHeaderNames.CONNECTION);
if (HttpHeaderValues.CLOSE.contentEqualsIgnoreCase(connection)) {
return false;
}
if (message.protocolVersion().isKeepAliveDefault()) {
return !HttpHeaderValues.CLOSE.contentEqualsIgnoreCase(connection);
} else {
return HttpHeaderValues.KEEP_ALIVE.contentEqualsIgnoreCase(connection);
}
return !message.headers().containsValue(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE, true) &&
(message.protocolVersion().isKeepAliveDefault() ||
message.headers().containsValue(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE, true));
}
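// A small illustration (header values assumed, class name made up) of the keep-alive decision implemented above:
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpUtil;
import io.netty.handler.codec.http.HttpVersion;

public final class KeepAliveExample {
    public static void main(String[] args) {
        HttpRequest http11 = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
        System.out.println(HttpUtil.isKeepAlive(http11)); // true: HTTP/1.1 defaults to keep-alive

        http11.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE);
        System.out.println(HttpUtil.isKeepAlive(http11)); // false: an explicit close wins

        HttpRequest http10 = new DefaultHttpRequest(HttpVersion.HTTP_1_0, HttpMethod.GET, "/");
        http10.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE);
        System.out.println(HttpUtil.isKeepAlive(http10)); // true: HTTP/1.0 opts in explicitly
    }
}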
/**
@ -251,13 +244,9 @@ public final class HttpUtil {
* present
*/
public static boolean is100ContinueExpected(HttpMessage message) {
if (!isExpectHeaderValid(message)) {
return false;
}
final String expectValue = message.headers().get(HttpHeaderNames.EXPECT);
// unquoted tokens in the expect header are case-insensitive, thus 100-continue is case insensitive
return HttpHeaderValues.CONTINUE.toString().equalsIgnoreCase(expectValue);
return isExpectHeaderValid(message)
// unquoted tokens in the expect header are case-insensitive, thus 100-continue is case insensitive
&& message.headers().contains(HttpHeaderNames.EXPECT, HttpHeaderValues.CONTINUE, true);
}
/**


@ -15,6 +15,8 @@
*/
package io.netty.handler.codec.http;
import static io.netty.util.internal.ObjectUtil.checkPositiveOrZero;
import io.netty.buffer.ByteBuf;
import io.netty.util.CharsetUtil;
@ -165,12 +167,8 @@ public class HttpVersion implements Comparable<HttpVersion> {
}
}
if (majorVersion < 0) {
throw new IllegalArgumentException("negative majorVersion");
}
if (minorVersion < 0) {
throw new IllegalArgumentException("negative minorVersion");
}
checkPositiveOrZero(majorVersion, "majorVersion");
checkPositiveOrZero(minorVersion, "minorVersion");
this.protocolName = protocolName;
this.majorVersion = majorVersion;


@ -54,7 +54,7 @@ import static io.netty.util.internal.StringUtil.*;
*
* <h3>HashDOS vulnerability fix</h3>
*
* As a workaround to the <a href="http://netty.io/s/hashdos">HashDOS</a> vulnerability, the decoder
* As a workaround to the <a href="https://netty.io/s/hashdos">HashDOS</a> vulnerability, the decoder
* limits the maximum number of decoded key-value parameter pairs, up to {@literal 1024} by
* default, and you can configure it when you construct the decoder by passing an additional
* integer parameter.


@ -97,24 +97,24 @@ final class CookieUtil {
static void add(StringBuilder sb, String name, long val) {
sb.append(name);
sb.append((char) HttpConstants.EQUALS);
sb.append('=');
sb.append(val);
sb.append((char) HttpConstants.SEMICOLON);
sb.append((char) HttpConstants.SP);
sb.append(';');
sb.append(HttpConstants.SP_CHAR);
}
static void add(StringBuilder sb, String name, String val) {
sb.append(name);
sb.append((char) HttpConstants.EQUALS);
sb.append('=');
sb.append(val);
sb.append((char) HttpConstants.SEMICOLON);
sb.append((char) HttpConstants.SP);
sb.append(';');
sb.append(HttpConstants.SP_CHAR);
}
static void add(StringBuilder sb, String name) {
sb.append(name);
sb.append((char) HttpConstants.SEMICOLON);
sb.append((char) HttpConstants.SP);
sb.append(';');
sb.append(HttpConstants.SP_CHAR);
}
static void addQuoted(StringBuilder sb, String name, String val) {
@ -123,12 +123,12 @@ final class CookieUtil {
}
sb.append(name);
sb.append((char) HttpConstants.EQUALS);
sb.append((char) HttpConstants.DOUBLE_QUOTE);
sb.append('=');
sb.append('"');
sb.append(val);
sb.append((char) HttpConstants.DOUBLE_QUOTE);
sb.append((char) HttpConstants.SEMICOLON);
sb.append((char) HttpConstants.SP);
sb.append('"');
sb.append(';');
sb.append(HttpConstants.SP_CHAR);
}
static int firstInvalidCookieNameOctet(CharSequence cs) {


@ -105,10 +105,10 @@ public final class ServerCookieEncoder extends CookieEncoder {
add(buf, CookieHeaderNames.MAX_AGE, cookie.maxAge());
Date expires = new Date(cookie.maxAge() * 1000 + System.currentTimeMillis());
buf.append(CookieHeaderNames.EXPIRES);
buf.append((char) HttpConstants.EQUALS);
buf.append('=');
DateFormatter.append(expires, buf);
buf.append((char) HttpConstants.SEMICOLON);
buf.append((char) HttpConstants.SP);
buf.append(';');
buf.append(HttpConstants.SP_CHAR);
}
if (cookie.path() != null) {


@ -191,7 +191,7 @@ public class CorsHandler extends ChannelDuplexHandler {
private static boolean isPreflightRequest(final HttpRequest request) {
final HttpHeaders headers = request.headers();
return request.method().equals(OPTIONS) &&
return OPTIONS.equals(request.method()) &&
headers.contains(HttpHeaderNames.ORIGIN) &&
headers.contains(HttpHeaderNames.ACCESS_CONTROL_REQUEST_METHOD);
}
@ -228,7 +228,8 @@ public class CorsHandler extends ChannelDuplexHandler {
}
private static void forbidden(final ChannelHandlerContext ctx, final HttpRequest request) {
HttpResponse response = new DefaultFullHttpResponse(request.protocolVersion(), FORBIDDEN);
HttpResponse response = new DefaultFullHttpResponse(
request.protocolVersion(), FORBIDDEN, ctx.alloc().buffer(0));
response.headers().set(HttpHeaderNames.CONTENT_LENGTH, HttpHeaderValues.ZERO);
release(request);
respond(ctx, request, response);


@ -59,7 +59,9 @@ public abstract class AbstractHttpData extends AbstractReferenceCounted implemen
}
@Override
public long getMaxSize() { return maxSize; }
public long getMaxSize() {
return maxSize;
}
@Override
public void setMaxSize(long maxSize) {


@ -128,8 +128,7 @@ public abstract class AbstractMemoryHttpData extends AbstractHttpData {
}
long newsize = file.length();
if (newsize > Integer.MAX_VALUE) {
throw new IllegalArgumentException(
"File too big to be loaded in memory");
throw new IllegalArgumentException("File too big to be loaded in memory");
}
checkSize(newsize);
FileInputStream inputStream = new FileInputStream(file);


@ -764,8 +764,6 @@ public class HttpPostMultipartRequestDecoder implements InterfaceHttpPostRequest
}
}
}
} else {
throw new ErrorDataDecoderException("Unknown Params: " + newline);
}
}
// Is it a FileUpload


@ -638,7 +638,7 @@ public class HttpPostRequestEncoder implements ChunkedInput<HttpContent> {
replacement.append("; ")
.append(HttpHeaderValues.FILENAME)
.append("=\"")
.append(fileUpload.getFilename())
.append(currentFileUpload.getFilename())
.append('"');
}
@ -977,7 +977,11 @@ public class HttpPostRequestEncoder implements ChunkedInput<HttpContent> {
if (buffer.capacity() == 0) {
currentData = null;
if (currentBuffer == null) {
currentBuffer = delimiter;
if (delimiter == null) {
return null;
} else {
currentBuffer = delimiter;
}
} else {
if (delimiter != null) {
currentBuffer = wrappedBuffer(currentBuffer, delimiter);


@ -19,7 +19,7 @@ import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
/**
* Web Socket frame containing binary data
* Web Socket frame containing binary data.
*/
public class BinaryWebSocketFrame extends WebSocketFrame {


@ -1,5 +1,5 @@
/*
* Copyright 2012 The Netty Project
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
@ -21,7 +21,7 @@ import io.netty.util.CharsetUtil;
import io.netty.util.internal.StringUtil;
/**
* Web Socket Frame for closing the connection
* Web Socket Frame for closing the connection.
*/
public class CloseWebSocketFrame extends WebSocketFrame {
@ -33,7 +33,31 @@ public class CloseWebSocketFrame extends WebSocketFrame {
}
/**
* Creates a new empty close frame with closing getStatus code and reason text
* Creates a new empty close frame with closing status code and reason text
*
* @param status
* Status code as per <a href="http://tools.ietf.org/html/rfc6455#section-7.4">RFC 6455</a>. For
* example, <tt>1000</tt> indicates normal closure.
*/
public CloseWebSocketFrame(WebSocketCloseStatus status) {
this(status.code(), status.reasonText());
}
/**
* Creates a new empty close frame with closing status code and reason text
*
* @param status
* Status code as per <a href="http://tools.ietf.org/html/rfc6455#section-7.4">RFC 6455</a>. For
* example, <tt>1000</tt> indicates normal closure.
* @param reasonText
* Reason text. Set to null if no text.
*/
public CloseWebSocketFrame(WebSocketCloseStatus status, String reasonText) {
this(status.code(), reasonText);
}
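// Brief usage sketch of the new status-based constructors (status values and class name chosen for illustration):
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.WebSocketCloseStatus;

public final class CloseFrameExample {
    public static void main(String[] args) {
        CloseWebSocketFrame normal = new CloseWebSocketFrame(WebSocketCloseStatus.NORMAL_CLOSURE);
        CloseWebSocketFrame policy =
                new CloseWebSocketFrame(WebSocketCloseStatus.POLICY_VIOLATION, "frame not allowed");
        System.out.println(normal.statusCode() + " " + normal.reasonText()); // 1000 plus the default reason
        System.out.println(policy.statusCode() + " " + policy.reasonText()); // 1008 frame not allowed
        normal.release();
        policy.release();
    }
}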
/**
* Creates a new empty close frame with closing status code and reason text
*
* @param statusCode
* Integer status code as per <a href="http://tools.ietf.org/html/rfc6455#section-7.4">RFC 6455</a>. For
@ -46,12 +70,12 @@ public class CloseWebSocketFrame extends WebSocketFrame {
}
/**
* Creates a new close frame with no losing getStatus code and no reason text
* Creates a new close frame with no closing status code and no reason text
*
* @param finalFragment
* flag indicating if this frame is the final fragment
* @param rsv
* reserved bits used for protocol extensions
* reserved bits used for protocol extensions.
*/
public CloseWebSocketFrame(boolean finalFragment, int rsv) {
this(finalFragment, rsv, Unpooled.buffer(0));
@ -105,7 +129,7 @@ public class CloseWebSocketFrame extends WebSocketFrame {
/**
* Returns the closing status code as per <a href="http://tools.ietf.org/html/rfc6455#section-7.4">RFC 6455</a>. If
* a getStatus code is set, -1 is returned.
* no status code is set, -1 is returned.
*/
public int statusCode() {
ByteBuf binaryData = content();
@ -114,10 +138,7 @@ public class CloseWebSocketFrame extends WebSocketFrame {
}
binaryData.readerIndex(0);
int statusCode = binaryData.readShort();
binaryData.readerIndex(0);
return statusCode;
return binaryData.getShort(0);
}
/**


@ -43,7 +43,7 @@ public class ContinuationWebSocketFrame extends WebSocketFrame {
}
/**
* Creates a new continuation frame with the specified binary data
* Creates a new continuation frame with the specified binary data.
*
* @param finalFragment
* flag indicating if this frame is the final fragment
@ -71,17 +71,17 @@ public class ContinuationWebSocketFrame extends WebSocketFrame {
}
/**
* Returns the text data in this frame
* Returns the text data in this frame.
*/
public String text() {
return content().toString(CharsetUtil.UTF_8);
}
/**
* Sets the string for this frame
* Sets the string for this frame.
*
* @param text
* text to store
* text to store.
*/
private static ByteBuf fromText(String text) {
if (text == null || text.isEmpty()) {


@ -0,0 +1,64 @@
/*
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.handler.codec.http.websocketx;
import io.netty.handler.codec.CorruptedFrameException;
import io.netty.handler.codec.DecoderException;
/**
* A {@link DecoderException} which is thrown when the received {@link WebSocketFrame} data could not be decoded by
* an inbound handler.
*/
public final class CorruptedWebSocketFrameException extends CorruptedFrameException {
private static final long serialVersionUID = 3918055132492988338L;
private final WebSocketCloseStatus closeStatus;
/**
* Creates a new instance.
*/
public CorruptedWebSocketFrameException() {
this(WebSocketCloseStatus.PROTOCOL_ERROR, null, null);
}
/**
* Creates a new instance.
*/
public CorruptedWebSocketFrameException(WebSocketCloseStatus status, String message, Throwable cause) {
super(message == null ? status.reasonText() : message, cause);
closeStatus = status;
}
/**
* Creates a new instance.
*/
public CorruptedWebSocketFrameException(WebSocketCloseStatus status, String message) {
this(status, message, null);
}
/**
* Creates a new instance.
*/
public CorruptedWebSocketFrameException(WebSocketCloseStatus status, Throwable cause) {
this(status, null, cause);
}
public WebSocketCloseStatus closeStatus() {
return closeStatus;
}
}
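// An illustrative handler (not part of this diff, names are made up) showing how the carried close status can
// be used to answer with a proper close frame before closing the connection:
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.websocketx.CloseWebSocketFrame;
import io.netty.handler.codec.http.websocketx.CorruptedWebSocketFrameException;
import io.netty.handler.codec.http.websocketx.WebSocketCloseStatus;

public class WebSocketErrorHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        if (cause instanceof CorruptedWebSocketFrameException) {
            WebSocketCloseStatus status = ((CorruptedWebSocketFrameException) cause).closeStatus();
            ctx.writeAndFlush(new CloseWebSocketFrame(status)).addListener(ChannelFutureListener.CLOSE);
        } else {
            ctx.fireExceptionCaught(cause);
        }
    }
}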


@ -19,7 +19,7 @@ import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
/**
* Web Socket frame containing binary data
* Web Socket frame containing binary data.
*/
public class PingWebSocketFrame extends WebSocketFrame {
@ -41,7 +41,7 @@ public class PingWebSocketFrame extends WebSocketFrame {
}
/**
* Creates a new ping frame with the specified binary data
* Creates a new ping frame with the specified binary data.
*
* @param finalFragment
* flag indicating if this frame is the final fragment


@ -19,7 +19,7 @@ import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
/**
* Web Socket frame containing binary data
* Web Socket frame containing binary data.
*/
public class PongWebSocketFrame extends WebSocketFrame {


@ -20,7 +20,7 @@ import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;
/**
* Web Socket text frame
* Web Socket text frame.
*/
public class TextWebSocketFrame extends WebSocketFrame {
@ -35,7 +35,7 @@ public class TextWebSocketFrame extends WebSocketFrame {
* Creates a new text frame with the specified text string. The final fragment flag is set to true.
*
* @param text
* String to put in the frame
* String to put in the frame.
*/
public TextWebSocketFrame(String text) {
super(fromText(text));
@ -59,7 +59,7 @@ public class TextWebSocketFrame extends WebSocketFrame {
* @param rsv
* reserved bits used for protocol extensions
* @param text
* String to put in the frame
* String to put in the frame.
*/
public TextWebSocketFrame(boolean finalFragment, int rsv, String text) {
super(finalFragment, rsv, fromText(text));
@ -74,7 +74,7 @@ public class TextWebSocketFrame extends WebSocketFrame {
}
/**
* Creates a new text frame with the specified binary data. The final fragment flag is set to true.
* Creates a new text frame with the specified binary data and the final fragment flag.
*
* @param finalFragment
* flag indicating if this frame is the final fragment
@ -88,7 +88,7 @@ public class TextWebSocketFrame extends WebSocketFrame {
}
/**
* Returns the text data in this frame
* Returns the text data in this frame.
*/
public String text() {
return content().toString(CharsetUtil.UTF_8);


@ -47,7 +47,7 @@ public class Utf8FrameValidator extends ChannelInboundHandlerAdapter {
if ((frame instanceof TextWebSocketFrame) ||
(utf8Validator != null && utf8Validator.isChecking())) {
// Check UTF-8 correctness for this payload
checkUTF8String(ctx, frame.content());
checkUTF8String(frame.content());
// This does a second check to make sure UTF-8
// correctness for entire text message
@ -60,12 +60,12 @@ public class Utf8FrameValidator extends ChannelInboundHandlerAdapter {
if (fragmentedFramesCount == 0) {
// First text or binary frame for a fragmented set
if (frame instanceof TextWebSocketFrame) {
checkUTF8String(ctx, frame.content());
checkUTF8String(frame.content());
}
} else {
// Subsequent frames - only check if init frame is text
if (utf8Validator != null && utf8Validator.isChecking()) {
checkUTF8String(ctx, frame.content());
checkUTF8String(frame.content());
}
}
@ -77,17 +77,18 @@ public class Utf8FrameValidator extends ChannelInboundHandlerAdapter {
super.channelRead(ctx, msg);
}
private void checkUTF8String(ChannelHandlerContext ctx, ByteBuf buffer) {
try {
if (utf8Validator == null) {
utf8Validator = new Utf8Validator();
}
utf8Validator.check(buffer);
} catch (CorruptedFrameException ex) {
if (ctx.channel().isActive()) {
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
private void checkUTF8String(ByteBuf buffer) {
if (utf8Validator == null) {
utf8Validator = new Utf8Validator();
}
utf8Validator.check(buffer);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
if (cause instanceof CorruptedFrameException && ctx.channel().isOpen()) {
ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
}
super.exceptionCaught(ctx, cause);
}
}


@ -1,5 +1,5 @@
/*
* Copyright 2012 The Netty Project
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
@ -36,7 +36,6 @@
package io.netty.handler.codec.http.websocketx;
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.CorruptedFrameException;
import io.netty.util.ByteProcessor;
/**
@ -79,7 +78,8 @@ final class Utf8Validator implements ByteProcessor {
codep = 0;
if (state != UTF8_ACCEPT) {
state = UTF8_ACCEPT;
throw new CorruptedFrameException("bytes are not UTF-8");
throw new CorruptedWebSocketFrameException(
WebSocketCloseStatus.INVALID_PAYLOAD_DATA, "bytes are not UTF-8");
}
}
@ -93,7 +93,8 @@ final class Utf8Validator implements ByteProcessor {
if (state == UTF8_REJECT) {
checking = false;
throw new CorruptedFrameException("bytes are not UTF-8");
throw new CorruptedWebSocketFrameException(
WebSocketCloseStatus.INVALID_PAYLOAD_DATA, "bytes are not UTF-8");
}
return true;
}


@ -1,5 +1,5 @@
/*
* Copyright 2012 The Netty Project
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
@ -19,6 +19,7 @@ import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;
import io.netty.handler.codec.TooLongFrameException;
import io.netty.util.internal.ObjectUtil;
import java.util.List;
@ -52,6 +53,17 @@ public class WebSocket00FrameDecoder extends ReplayingDecoder<Void> implements W
this.maxFrameSize = maxFrameSize;
}
/**
* Creates a new instance of {@code WebSocket00FrameDecoder} with the specified {@code decoderConfig}. If the client
* sends a frame larger than the configured maximum frame payload length, the channel will be closed.
*
* @param decoderConfig
* Frames decoder configuration.
*/
public WebSocket00FrameDecoder(WebSocketDecoderConfig decoderConfig) {
this.maxFrameSize = ObjectUtil.checkNotNull(decoderConfig, "decoderConfig").maxFramePayloadLength();
}
@Override
protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
// Discard all data received if closing handshake was received before.
@ -96,7 +108,7 @@ public class WebSocket00FrameDecoder extends ReplayingDecoder<Void> implements W
if (type == (byte) 0xFF && frameSize == 0) {
receivedClosingHandshake = true;
return new CloseWebSocketFrame();
return new CloseWebSocketFrame(true, 0, ctx.alloc().buffer(0));
}
ByteBuf payload = readBytes(ctx.alloc(), buffer, (int) frameSize);
return new BinaryWebSocketFrame(payload);


@ -1,5 +1,5 @@
/*
* Copyright 2012 The Netty Project
* Copyright 2019 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
@ -71,7 +71,11 @@ public class WebSocket07FrameDecoder extends WebSocket08FrameDecoder {
* helps check for denial of services attacks.
*/
public WebSocket07FrameDecoder(boolean expectMaskedFrames, boolean allowExtensions, int maxFramePayloadLength) {
this(expectMaskedFrames, allowExtensions, maxFramePayloadLength, false);
this(WebSocketDecoderConfig.newBuilder()
.expectMaskedFrames(expectMaskedFrames)
.allowExtensions(allowExtensions)
.maxFramePayloadLength(maxFramePayloadLength)
.build());
}
/**
@ -91,6 +95,21 @@ public class WebSocket07FrameDecoder extends WebSocket08FrameDecoder {
*/
public WebSocket07FrameDecoder(boolean expectMaskedFrames, boolean allowExtensions, int maxFramePayloadLength,
boolean allowMaskMismatch) {
super(expectMaskedFrames, allowExtensions, maxFramePayloadLength, allowMaskMismatch);
this(WebSocketDecoderConfig.newBuilder()
.expectMaskedFrames(expectMaskedFrames)
.allowExtensions(allowExtensions)
.maxFramePayloadLength(maxFramePayloadLength)
.allowMaskMismatch(allowMaskMismatch)
.build());
}
/**
* Constructor
*
* @param decoderConfig
* Frames decoder configuration.
*/
public WebSocket07FrameDecoder(WebSocketDecoderConfig decoderConfig) {
super(decoderConfig);
}
}
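// Usage sketch of the builder the constructors above now delegate to (values and class name are illustrative):
import io.netty.handler.codec.http.websocketx.WebSocket07FrameDecoder;
import io.netty.handler.codec.http.websocketx.WebSocketDecoderConfig;

public final class DecoderConfigExample {
    public static void main(String[] args) {
        WebSocketDecoderConfig config = WebSocketDecoderConfig.newBuilder()
                .expectMaskedFrames(true)
                .allowExtensions(false)
                .maxFramePayloadLength(65536)
                .allowMaskMismatch(false)
                .build();
        WebSocket07FrameDecoder decoder = new WebSocket07FrameDecoder(config);
        System.out.println(decoder.getClass().getSimpleName());
    }
}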

Some files were not shown because too many files have changed in this diff.