Commit Graph

8658 Commits

lutovich
25d146038c Improved error message in FixedChannelPool
Motivation:

`FixedChannelPool` allows users to configure `acquireTimeoutMillis`
and expects the given value to be greater than or equal to zero when
a timeout action is supplied. However, the validation error message
said that a value greater than or equal to one was expected, while
the code actually checks against zero.
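
An illustrative sketch of the kind of validation described (hypothetical code, not necessarily the exact FixedChannelPool source):

```
final class AcquireTimeoutValidation {
    // Illustrative only; not necessarily the exact Netty source.
    // Zero is a valid value, so the message must say ">= 0".
    static long checkAcquireTimeoutMillis(long acquireTimeoutMillis) {
        if (acquireTimeoutMillis < 0) {
            throw new IllegalArgumentException(
                    "acquireTimeoutMillis: " + acquireTimeoutMillis + " (expected: >= 0)");
        }
        return acquireTimeoutMillis;
    }
}
```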

Modifications:

Changed the error message to say that a value greater than or equal
to zero is expected. Added a test to check that zero is an acceptable
value.

Result:

An exception with the right error message is thrown.
2017-11-13 20:29:07 +01:00
Norman Maurer
0013567cd8 Don't try to use UnixResolverDnsServerAddressStreamProvider when on Windows.
Motivation:

We should not try to use UnixResolverDnsServerAddressStreamProvider when on Windows as it will log some error that will produce noise and may confuse users.

Modifications:

Just use DefaultDnsServerAddressStreamProvider if Windows is used.

Result:

Less noise in the logs. This was reported in vert.x: https://github.com/eclipse/vert.x/issues/2204
2017-11-13 20:26:38 +01:00
Anuraag Agrawal
1f1a60ae7d Use Netty's DefaultPriorityQueue instead of JDK's PriorityQueue for scheduled tasks
Motivation:

`AbstractScheduledEventExecutor` uses a standard `java.util.PriorityQueue` to keep track of task deadlines. `ScheduledFuture.cancel` removes tasks from this `PriorityQueue`. Unfortunately, `PriorityQueue.remove` has `O(n)` performance since it must search for the item in the entire queue before removing it. This is fast when the future is at the front of the queue (e.g., already triggered) but not when it's randomly located in the queue.

Many servers will use `ScheduledFuture.cancel` on all requests, e.g., to manage a request timeout. As these cancellations happen in arbitrary order, when there are many scheduled futures, `PriorityQueue.remove` becomes a bottleneck and greatly hurts performance with many concurrent requests (>10K).

Modification:

Use Netty's `DefaultPriorityQueue` for scheduling futures instead of the JDK's. `DefaultPriorityQueue` is almost identical to the JDK version, except it can remove futures without searching for them in the queue. This means `DefaultPriorityQueue.remove` has `O(log n)` performance.
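
The key idea is that each queued task remembers its own index in the heap array, so remove() can go straight to the element instead of scanning. A minimal, hypothetical sketch of that idea (not Netty's actual DefaultPriorityQueue code, and not thread-safe):

```
import java.util.Arrays;

// Hypothetical sketch of an index-aware min-heap keyed on task deadlines.
final class IndexedMinHeap {
    static final class Node {
        final long deadline;
        int index = -1; // position in the heap array, maintained by the heap
        Node(long deadline) { this.deadline = deadline; }
    }

    private Node[] heap = new Node[16];
    private int size;

    void offer(Node node) {
        if (size == heap.length) { heap = Arrays.copyOf(heap, size << 1); }
        bubbleUp(size++, node);
    }

    // O(log n): the node knows where it lives, so no O(n) search is required.
    // Assumes the node is currently in this heap.
    void remove(Node node) {
        int i = node.index;
        Node last = heap[--size];
        heap[size] = null;
        node.index = -1;
        if (i != size) {
            bubbleDown(i, last);
            if (heap[i] == last) { bubbleUp(i, last); }
        }
    }

    private void bubbleUp(int i, Node node) {
        while (i > 0) {
            int parent = (i - 1) >>> 1;
            if (heap[parent].deadline <= node.deadline) { break; }
            heap[i] = heap[parent];
            heap[i].index = i;
            i = parent;
        }
        heap[i] = node;
        node.index = i;
    }

    private void bubbleDown(int i, Node node) {
        int half = size >>> 1;
        while (i < half) {
            int child = (i << 1) + 1;
            if (child + 1 < size && heap[child + 1].deadline < heap[child].deadline) { child++; }
            if (node.deadline <= heap[child].deadline) { break; }
            heap[i] = heap[child];
            heap[i].index = i;
            i = child;
        }
        heap[i] = node;
        node.index = i;
    }
}
```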

Result:

Before - cancelling futures has varying performance, capped at `O(n)`
After - cancelling futures has stable performance, capped at `O(log n)`

Benchmark results

After - cancelling in order and in reverse order have similar performance within `O(log n)` bounds
```
Benchmark                                           (num)   Mode  Cnt       Score      Error  Units
ScheduledFutureTaskBenchmark.cancelInOrder            100  thrpt   20  137779.616 ± 7709.751  ops/s
ScheduledFutureTaskBenchmark.cancelInOrder           1000  thrpt   20   11049.448 ±  385.832  ops/s
ScheduledFutureTaskBenchmark.cancelInOrder          10000  thrpt   20     943.294 ±   12.391  ops/s
ScheduledFutureTaskBenchmark.cancelInOrder         100000  thrpt   20      64.210 ±    1.824  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder     100  thrpt   20  167531.096 ± 9187.865  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder    1000  thrpt   20   33019.786 ± 4737.770  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder   10000  thrpt   20    2976.955 ±  248.555  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder  100000  thrpt   20     362.654 ±   45.716  ops/s
```

Before - cancelling in order and in reverse order have significantly different performance at higher queue size, orders of magnitude worse than the new implementation.
```
Benchmark                                           (num)   Mode  Cnt       Score       Error  Units
ScheduledFutureTaskBenchmark.cancelInOrder            100  thrpt   20  139968.586 ± 12951.333  ops/s
ScheduledFutureTaskBenchmark.cancelInOrder           1000  thrpt   20   12274.420 ±   337.800  ops/s
ScheduledFutureTaskBenchmark.cancelInOrder          10000  thrpt   20     958.168 ±    15.350  ops/s
ScheduledFutureTaskBenchmark.cancelInOrder         100000  thrpt   20      53.381 ±    13.981  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder     100  thrpt   20  123918.829 ±  3642.517  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder    1000  thrpt   20    5099.810 ±   206.992  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder   10000  thrpt   20      72.335 ±     0.443  ops/s
ScheduledFutureTaskBenchmark.cancelInReverseOrder  100000  thrpt   20       0.743 ±     0.003  ops/s
```
2017-11-10 23:09:32 -08:00
Norman Maurer
2adb8bd80f Use Parameterized to run SslHandler tests with different SslProviders.
Motivation:

At the moment we use loops to run SslHandler tests with different SslProviders, which is error-prone and also makes it difficult to understand with which provider a test failed.

Modifications:

- Move unit tests that should run with multiple SslProviders to extra class.
- Use junit Parameterized to run with different SslProvider combinations

Result:

Easier to understand which SslProvider produced test failures
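
A minimal sketch of the JUnit 4 Parameterized pattern described above, using a hypothetical test class name (the real tests are Netty's SslHandler test classes):

```
import io.netty.handler.ssl.SslProvider;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import java.util.Arrays;
import java.util.Collection;

// Hypothetical example; each provider combination shows up in the test name.
@RunWith(Parameterized.class)
public class ParameterizedSslTestSketch {

    @Parameters(name = "client={0}, server={1}")
    public static Collection<Object[]> providers() {
        return Arrays.asList(new Object[][] {
                { SslProvider.JDK, SslProvider.JDK },
                { SslProvider.JDK, SslProvider.OPENSSL },
                { SslProvider.OPENSSL, SslProvider.JDK },
                { SslProvider.OPENSSL, SslProvider.OPENSSL },
        });
    }

    private final SslProvider clientProvider;
    private final SslProvider serverProvider;

    public ParameterizedSslTestSketch(SslProvider clientProvider, SslProvider serverProvider) {
        this.clientProvider = clientProvider;
        this.serverProvider = serverProvider;
    }

    @Test
    public void handshakeCompletes() {
        // Build client/server contexts from clientProvider / serverProvider here;
        // a failure report now names the exact provider combination.
    }
}
```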
2017-11-10 07:20:35 -08:00
Norman Maurer
cc069722a2 CompositeByteBuf.copy() and copy(...) should respect the allocator
Motivation:

When calling CompositeByteBuf.copy() and copy(...) we currently use Unpooled to allocate the buffer. This is not really correct and may produce more GC than needed. We should use the allocator that was used when creating the CompositeByteBuf to allocate the new buffer, which may for example be the PooledByteBufAllocator.
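
The gist of the change, sketched (not the exact Netty implementation): allocate the copy from the source buffer's own allocator instead of going through Unpooled.

```
import io.netty.buffer.ByteBuf;

// Sketch only: copy a region using the source buffer's allocator so that a
// pooled source produces a pooled copy instead of an unpooled one.
final class AllocatorAwareCopy {
    static ByteBuf copy(ByteBuf src, int index, int length) {
        ByteBuf copy = src.alloc().buffer(length); // instead of Unpooled.buffer(length)
        copy.writeBytes(src, index, length);
        return copy;
    }
}
```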

Modifications:

- Use alloc() to allocate the new buffer.
- Add tests
- Fix tests that depend on the copy being backed by a byte array without checking hasArray() first.

Result:

Fixes [#7393].
2017-11-10 07:17:16 -08:00
Norman Maurer
188ea59c9d [maven-release-plugin] prepare for next development iteration 2017-11-08 22:36:53 +00:00
Norman Maurer
812354cf1f [maven-release-plugin] prepare release netty-4.1.17.Final 2017-11-08 22:36:33 +00:00
Norman Maurer
3554646a60 Correctly convert empty HttpContent to ByteBuf
Motivation:

93130b172a introduced a regression where we did not "convert" an empty HttpContent to ByteBuf and just passed it on in the pipeline. This can lead to the situation that other handlers in the pipeline will see HttpContent instances, which they do not expect.

Modifications:

- Correctly convert HttpContent to ByteBuf when empty
- Add unit test.

Result:

Handlers in the pipeline will see the expected message type.
2017-11-08 13:46:32 -08:00
Ning Sun
73e8122fc1 Fix sharable check logic
Motivation:

There is a logic issue when checking if a ChannelHandler is sharable.

Modification:

Corrected || to &&
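
A hypothetical illustration of the kind of condition involved (not the literal Netty source); rejecting a handler requires that it is non-sharable and already added, hence `&&`:

```
import io.netty.channel.ChannelHandler;

// Hypothetical sketch; the real check lives in Netty's pipeline code.
final class SharableCheckSketch {
    // Re-adding a handler is only an error when it is NOT sharable AND was already added,
    // so the two conditions must be combined with && rather than ||.
    static void checkMultiplicity(ChannelHandler handler, boolean alreadyAdded) {
        if (!isSharable(handler) && alreadyAdded) {
            throw new IllegalStateException(
                    handler.getClass().getName() + " is not @Sharable and was already added");
        }
    }

    private static boolean isSharable(ChannelHandler handler) {
        return handler.getClass().isAnnotationPresent(ChannelHandler.Sharable.class);
    }
}
```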
2017-11-08 08:36:07 -08:00
Carl Mastrangelo
ef5ebb40c9 Keep all leak records up to the target amount
Motivation:
When looking for a leak, it's nice to be able to request that at least
a certain number of leak records be retained.

Modification:

* Record all leak accesses up to the target amount, and only
  then enable backing off.
* Enable recording more than 32 elements.  Previously the shift
  amount made this impossible.

Result:
Ability to record all accesses.
2017-11-07 19:09:14 -08:00
Scott Mitchell
7511c15187 AbstractCoalescingBufferQueue addFirst void promise handling
Motivation:
AbstractCoalescingBufferQueue#add accounts for void promises, but AbstractCoalescingBufferQueue#addFirst does not. These methods should be consistent.

Modifications:
- AbstractCoalescingBufferQueue#addFirst should account for void promises and share code with AbstractCoalescingBufferQueue#add

Result:
More correct void promise handling in AbstractCoalescingBufferQueue.
2017-11-07 11:33:53 -08:00
Scott Mitchell
8c5eeb581e SslHandler promise completion incorrect if write doesn't immediately complete

Motivation:
SslHandler removes a Buffer/Promise pair from
AbstractCoalescingBufferQueue when wrapping data. However it is possible
the SSLEngine will not consume the entire buffer. In this case
SslHandler adds the Buffer back to the queue, but doesn't add the
Promise back to the queue. This may result in the promise completing
immediately in finishFlush, and generally not correlating to the
completion of writing the corresponding Buffer

Modifications:
- AbstractCoalescingBufferQueue#addFirst should also support adding the
ChannelPromise
- In the event of a handshake timeout we should fail pending writes
immediately to get a more accurate exception

Result:
Fixes https://github.com/netty/netty/issues/7378.
2017-11-07 09:24:40 -08:00
Scott Mitchell
8618a3351c ReadOnlyHttpHeaders
Motivation:
For use cases that create headers but do not need to modify them, a read-only variant of HttpHeaders would be useful and may be able to provide better iteration performance for encoding.

Modifications:
- Introduce ReadOnlyHttpHeaders that is backed by a flat array
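
The flat-array idea, sketched under assumptions (this is not the actual ReadOnlyHttpHeaders source): names and values interleave in a single array, so encoders can iterate with a plain index walk and no per-entry objects.

```
import java.util.function.BiConsumer;

// Hypothetical sketch of a flat-array header store: name at even indices, value at odd.
final class FlatHeadersSketch {
    private final CharSequence[] nameValuePairs;

    FlatHeadersSketch(CharSequence... nameValuePairs) {
        if ((nameValuePairs.length & 1) != 0) {
            throw new IllegalArgumentException("expected name/value pairs");
        }
        this.nameValuePairs = nameValuePairs.clone(); // defensive copy; instance is read-only
    }

    CharSequence get(CharSequence name) {
        for (int i = 0; i < nameValuePairs.length; i += 2) {
            if (nameValuePairs[i].toString().equalsIgnoreCase(name.toString())) {
                return nameValuePairs[i + 1];
            }
        }
        return null;
    }

    // Allocation-free iteration for an encoder: no iterator or entry objects needed.
    void forEach(BiConsumer<CharSequence, CharSequence> action) {
        for (int i = 0; i < nameValuePairs.length; i += 2) {
            action.accept(nameValuePairs[i], nameValuePairs[i + 1]);
        }
    }
}
```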

Result:
ReadOnlyHttpHeaders exists for non-modifiable HttpHeaders use cases.
2017-11-06 21:58:16 -08:00
Norman Maurer
e7f02b1dc0 Set readPending to false when EOF is detected while issuing a read
Motivation:

We need to set readPending to false when we detect an EOF while issuing a read, as otherwise we may not unregister from the Selector / Epoll / KQueue and so keep on receiving wakeups.

The important bit is that we may even get a wakeup for a read event but will still only be able to read 0 bytes from the socket, so we need to be very careful when we clear readPending. This can happen because we generally use edge-triggered mode for our native transports, and because of the nature of edge-triggered we may schedule a read event just to find out there is nothing left to read at the moment (because we completely drained the socket on the previous read).

Modifications:

Set readPending to false when EOF is detected.

Result:

Fixes [#7255].
2017-11-06 15:44:36 -08:00
Scott Mitchell
570d96d8c2 SslHandler leak
Motivation:
SslHandler only supports ByteBuf objects, but will not release objects of other types. SslHandler will also not release objects if its internal state is not correctly set up.

Modifications:
- Release non-ByteBuf objects in write
- Release all objects if the SslHandler queue is not set up
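
A hypothetical sketch of the first modification, roughly the shape such a guard in write(...) can take (not the actual SslHandler code):

```
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.handler.codec.UnsupportedMessageTypeException;
import io.netty.util.ReferenceCountUtil;

// Hypothetical sketch; the real logic lives inside SslHandler#write.
public class RejectNonByteBufHandler extends ChannelOutboundHandlerAdapter {
    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        if (!(msg instanceof ByteBuf)) {
            UnsupportedMessageTypeException e =
                    new UnsupportedMessageTypeException(msg, ByteBuf.class);
            ReferenceCountUtil.safeRelease(msg); // do not leak unsupported messages
            promise.setFailure(e);
            return;
        }
        ctx.write(msg, promise);
    }
}
```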

Result:
Less leaks in SslHandler.
2017-11-06 15:41:42 -08:00
Scott Mitchell
35b0cd58fb HTTP/2 write of released buffer should not write and should fail the promise
Motivation:
HTTP/2 allows writes of 0 length data frames. However in some cases EMPTY_BUFFER is used instead of the actual buffer that was written. This may mask writes of released buffers or otherwise invalid buffer objects. It is also possible that if the buffer is invalid AbstractCoalescingBufferQueue will not release the aggregated buffer nor fail the associated promise.

Modifications:
- DefaultHttp2FrameCodec should take care to fail the promise, even if releasing the data throws
- AbstractCoalescingBufferQueue should release any aggregated data and fail the associated promise if something goes wrong during aggregation

Result:
More correct handling of invalid buffers in HTTP/2 code.
2017-11-06 14:38:58 -08:00
Norman Maurer
bcad9dbf97 Revert "Set readPending to false when ever a read is done"
This reverts commit 413c7c2cd8 as it introduced a regression when edge-triggered mode is used, which is true for our native transports by default. With 413c7c2cd8 included it was possible that we set readPending to false by mistake even though we would still be interested in reading more.
2017-11-06 09:21:42 -08:00
Norman Maurer
e0bbff74f7 Correctly handle WebSockets 00 when using HttpClientCodec.
Motivation:

7995afee8f introduced a change that broke special handling of WebSockets 00.

Modifications:

Correctly delegate to super method which has special handling for WebSockets 00.

Result:

Fixes [#7362].
2017-11-03 15:55:22 +01:00
Scott Mitchell
93130b172a HttpObjectEncoder and MessageAggregator EMPTY_BUFFER usage
Motivation:
HttpObjectEncoder and MessageAggregator treat buffers that are not readable specially. If a buffer is not readable, then an EMPTY_BUFFER is written and the actual buffer is ignored. If the buffer has already been released then this will not be correct, as the promise will be completed, but in reality the original content shouldn't have resulted in any write because it was invalid.

Modifications:
- HttpObjectEncoder should retain/write the original buffer instead of using EMPTY_BUFFER
- MessageAggregator should retain/write the original ByteBufHolder instead of using EMPTY_BUFFER

Result:
Invalid write operations which happen to not be readable correctly reflect failed status in the promise, and do not result in any writes to the channel.
2017-11-03 07:03:19 +01:00
Scott Mitchell
fbe0e3506e OpenSslEngine support unwrap plaintext greater than 2^14 and avoid infinite loop

Motivation:
If SslHandler sets jdkCompatibilityMode to false and ReferenceCountedOpenSslEngine sets jdkCompatibilityMode to true there is a chance we will get stuck in an infinite loop if the peer sends a TLS packet with length greater than 2^14 (the maximum length allowed in the TLS 1.2 RFC [1]). However there are legacy implementations which actually send larger TLS payloads than 2^14 (e.g. OpenJDK's SSLSessionImpl [2]) and in this case ReferenceCountedOpenSslEngine will return BUFFER_OVERFLOW in an attempt to notify that a larger buffer is to be used, but if the buffer is already at max size this process will repeat indefinitely.

[1] https://tools.ietf.org/html/rfc5246#section-6.2.1
[2] http://hg.openjdk.java.net/jdk8u/jdk8u/jdk/file/d5a00b1e8f78/src/share/classes/sun/security/ssl/SSLSessionImpl.java#l793

Modifications:
- Support TLS payload sizes greater than 2^14 in ReferenceCountedOpenSslEngine
- ReferenceCountedOpenSslEngine should throw an exception if a
BUFFER_OVERFLOW is impossible to rectify

Result:
No more infinite loop in ReferenceCountedOpenSslEngine due to
BUFFER_OVERFLOW and large TLS payload lengths.
2017-11-02 11:42:38 -07:00
Norman Maurer
7321418eb5 Fix possible NPE in ReferenceCountedOpenSslEngine.rejectRemoteInitiatedRenegotiation()
Motivation:

ReferenceCountedOpenSslEngine.rejectRemoteInitiatedRenegotiation() is called in a finally block to ensure we always check for renegotiation. The problem here is that sometimes we will have already shut down the engine before we call the method, which will lead to an NPE in this case as the ssl pointer was already destroyed.

Modifications:

Check that the engine is not destroyed yet before calling SSL.getHandshakeCount(...)

Result:

Fixes [#7353].
2017-11-02 14:28:59 +01:00
Scott Mitchell
fa584c146f SslHandler derives jdkCompatibilityMode from SSLEngine
Motivation:
Some SSLEngine implementations (e.g. ReferenceCountedOpenSslContext) support unwrapping/wrapping multiple packets at a time. The SslHandler behaves differently if the SSLEngine supports this feature, but currently requires that the constructor argument be coordinated between the SSLEngine creation and the SslHandler. This can be difficult, or require package-private access, if extending the SslHandler.

Modifications:
- The SslHandler should inspect the SSLEngine to see if it supports jdkCompatibilityMode instead of relying on getting an extra constructor argument which may be out of sync with the SSLEngine

Result:
Easier to override SslHandler and have consistent jdkCompatibilityMode between SSLEngine and SslHandler.
2017-11-01 12:00:51 -07:00
Norman Maurer
ad1f0d46b3 Take the architecture into account when loading netty-tcnative
Motivation:

We should ensure we only try to load the netty-tcnative version that was compiled for the architecture we are using.

Modifications:

Include architecture into native lib name.

Result:

Only load native lib if the architecture is supported.
2017-11-01 08:52:09 +01:00
Janusz Dziemidowicz
cdb2a27857 Add TCP_FASTOPEN_CONNECT epoll option
Motivation:

Linux kernel 4.11 introduced a new socket option,
TCP_FASTOPEN_CONNECT, that greatly simplifies making TCP Fast Open
connections on the client side. Usually simply setting the flag before
the connect() call is enough; no further changes are required.

Details can be found in kernel commit:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=19f6d3f3

Modifications:

TCP_FASTOPEN_CONNECT socket option was added to EpollChannelOption
class.

Result:

Netty clients can easily make TCP Fast Open connections. Simply
calling option(EpollChannelOption.TCP_FASTOPEN_CONNECT, true) in
client bootstrap is enough (given recent enough kernel).
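
A minimal client bootstrap sketch for the option mentioned above (placeholder address; assumes the epoll transport and a recent enough kernel):

```
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollChannelOption;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollSocketChannel;
import io.netty.channel.socket.SocketChannel;

// Sketch only: enable TCP Fast Open on the connecting side via the new option.
public final class TfoClientSketch {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new EpollEventLoopGroup();
        try {
            Bootstrap b = new Bootstrap()
                    .group(group)
                    .channel(EpollSocketChannel.class)
                    .option(EpollChannelOption.TCP_FASTOPEN_CONNECT, true)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // add application handlers here
                        }
                    });
            // Placeholder address for the sketch.
            b.connect("127.0.0.1", 8080).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```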
2017-10-29 13:42:15 +01:00
Piotr Kołaczkowski
7995afee8f Don't disable HttpObjectDecoder on upgrade from HTTP/1.x to HTTP/1.x over TLS
This change allows upgrading a plain HTTP/1.x connection to TLS
according to RFC 2817. Switching the transport layer to TLS should be
possible without removing HttpClientCodec from the pipeline,
because the HTTP/1.x layer of the protocol remains untouched by the
switch and the HttpClientCodec state must be retained for properly
handling the remainder of the response message,
per the RFC 2817 requirement in point 3.3:

  Once the TLS handshake completes successfully, the server MUST
  continue with the response to the original request.

After this commit, the upgrade can be established by simply
inserting an SslHandler at the front of the pipeline after receiving
the 101 SWITCHING PROTOCOLS response, exactly as described in the
SslHandler documentation.

Modifications:
- Don't set HttpObjectDecoder into UPGRADED state if
  101 SWITCHING_PROTOCOLS response contains HTTP/1.0 or HTTP/1.1 in
  the protocol stack described by the Upgrade header.
- Skip pairing comparison for 101 SWITCHING_PROTOCOLS, similar
  to 100 CONTINUE, since 101 is not the final response to the original
  request and the final response is expected after TLS handshake.

Fixes #7293.
2017-10-29 13:21:11 +01:00
Trask Stalnaker
58e74e9fee Support running Netty in bootstrap class loader
Motivation:

Fix NullPointerExceptions that occur when running netty-tcnative inside the bootstrap class loader.

Modifications:

- Replace loader.getResource(...) with ClassLoader.getSystemResource(...) when loader is null.
- Replace loader.loadClass(...) with Class.forName(..., false, loader) which works when loader is both null and non-null.

Result:

Support running native libs in bootstrap class loader
2017-10-29 13:13:19 +01:00
Scott Mitchell
911b2acc50 HTTP/2 Child Channel reading and flushing
Motivation:
If a child channel's read is triggered outside the parent channel's read
loop then it is possible a WINDOW_UPDATE will be written, but not
flushed.
If a child channel's beginRead processes data from the inboundBuffer and
readPending is then set to false, data may not be delivered when the
parent's read loop attempts to deliver more data to that child channel.

Modifications:
- The child channel must force a flush if a frame is written as a result
of reading a frame, and this is not in the parent channel's read loop
- The child channel must allow a transition from dequeueing from
beginRead into the parent channel's read loop to deliver more data

Result:
The child channel flushes data when reading outside the parent's read
loop, and has frames delivered more reliably.
2017-10-26 10:06:22 -07:00
Scott Mitchell
413c7c2cd8 Set readPending to false when ever a read is done
Motivation:
readPending is currently only set to false if data is delivered to the application; however, this may result in duplicate events being received from the selector in the event that the socket was closed.

Modifications:
- We should set readPending to false before each read attempt for all
transports besides NIO.
- Based upon the Javadocs it is possible that NIO may have spurious
wakeups [1]. In this case we should be more cautious and only set
readPending to false if data was actually read.

[1] https://docs.oracle.com/javase/7/docs/api/java/nio/channels/SelectionKey.html
That a selection key's ready set indicates that its channel is ready for some operation category is a hint, but not a guarantee, that an operation in such a category may be performed by a thread without causing the thread to block.

Result:
Notification from the selector (or simulated events from kqueue/epoll ET) in the event of socket closure.
Fixes https://github.com/netty/netty/issues/7255
2017-10-25 08:25:54 -07:00
Lionel Li
424bb09d24 Http2StreamFrameToHttpObjectCodec should handle 100-Continue properly
Motivation:
Http2StreamFrameToHttpObjectCodec was not properly encoding and
decoding 100-Continue HttpResponse/Http2HeadersFrame messages. It was
encoding a 100-Continue FullHttpResponse as an Http2HeadersFrame with
endStream=true, causing the child channel to terminate. It was not
decoding a 100-Continue Http2HeadersFrame (endStream=false) as a
FullHttpResponse. This should be fixed as it would cause the http2 child
stream to prematurely close, and could cause HttpObjectAggregator to
fail if it's in the pipeline.

Modification:
- Fixed encode() to properly encode a 100-Continue FullHttpResponse as
  an Http2HeadersFrame with endStream=false
- Reject 100-Continue HttpResponses that are NOT FullHttpResponses
- Fixed decode() to properly decode a 100-Continue Http2HeadersFrame
  (endStream=false) as a FullHttpResponse
- Made Http2StreamFrameToHttpObjectCodec sharable so that it can be used
  among child streams within the same Http2MultiplexCodec

Result:
Now Http2StreamFrameToHttpObjectCodec should be properly handling
100-Continue responses.
2017-10-25 06:56:02 +02:00
Dmitry Minkovsky
8aeba78ecc HttpPostMultipartRequestDecoder should decode header field parameters
Motivation:

I am receiving a multipart/form-data upload from a Mailgun webhook. This webhook used to send parts like this:

```
--74e78d11b0214bdcbc2f86491eeb4902
Content-Disposition: form-data; name="attachment-2"; filename="attached_файл.txt"
Content-Type: text/plain
Content-Length: 32

This is the content of the file

--74e78d11b0214bdcbc2f86491eeb4902--
```
but now it posts parts like this:

```
--74e78d11b0214bdcbc2f86491eeb4902
Content-Disposition: form-data; name="attachment-2"; filename*=utf-8''attached_%D1%84%D0%B0%D0%B9%D0%BB.txt

This is the content of the file

--74e78d11b0214bdcbc2f86491eeb4902--
```
This new format uses the field parameter encoding described in RFC 5987.

Netty does not parse this format. The result is that the filename is not decoded and the part is not parsed into a FileUpload.

Modification:

Added a failing test in HttpPostRequestDecoderTest.java and updated HttpPostMultipartRequestDecoder.java.
Refactored to please Jenkins.

Result:

HttpPostMultipartRequestDecoder identifies the RFC 5987 format and parses it.
Previous functionality is retained.
2017-10-24 19:30:59 +02:00
Carl Mastrangelo
e62e6df4ac Use WeakReferences for Resource Leaks
Motivation:
Phantom references are for cleaning up resources that were
forgotten, which means they keep their referent alive. This
means garbage is kept around until the reference queue is drained,
rather than being reclaimed when the reference becomes unreachable.

Modification:
Use Weak References instead of Phantoms

Result:
More punctual leak detection.
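
A generic sketch of weak-reference based tracking (not Netty's actual ResourceLeakDetector): the tracker's reference is enqueued once the tracked resource becomes unreachable, at which point an un-closed entry is reported as a leak.

```
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Generic sketch of weak-reference based leak tracking; not Netty's ResourceLeakDetector.
final class WeakLeakTrackerSketch {

    static final class LeakRef extends WeakReference<Object> {
        LeakRef(Object referent, ReferenceQueue<Object> queue) {
            super(referent, queue);
        }
    }

    private final ReferenceQueue<Object> refQueue = new ReferenceQueue<Object>();
    private final Set<LeakRef> active =
            Collections.newSetFromMap(new ConcurrentHashMap<LeakRef, Boolean>());

    LeakRef track(Object resource) {
        LeakRef ref = new LeakRef(resource, refQueue);
        active.add(ref);
        return ref;
    }

    // Call when the resource is released properly; the reference is then never reported.
    void close(LeakRef ref) {
        active.remove(ref);
        ref.clear();
    }

    // Resources collected without close() show up on the queue once unreachable.
    int reportLeaks() {
        int leaks = 0;
        Reference<?> ref;
        while ((ref = refQueue.poll()) != null) {
            if (active.remove(ref)) {
                leaks++;
                System.err.println("LEAK: resource was garbage-collected before being released");
            }
        }
        return leaks;
    }
}
```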
2017-10-24 19:21:33 +02:00
Norman Maurer
92b786e2f3 Fix possible leak in SslHandler if wrap(...) throws.
Motivation:

We can end up with a buffer leak if SSLEngine.wrap(...) throws.

Modifications:

Correctly release the ByteBuf if SSLEngine.wrap(...) throws.

Result:

Fixes [#7337].
2017-10-24 18:52:20 +02:00
Ned Twigg
dcbbae7f90 Added QueryStringDecoder.rawPath() and rawQuery()
Motivation:

Before this commit, it is impossible to access the path component of the
URI before it has been decoded.  This makes it impossible to distinguish
between the following URIs:

/user/title?key=value
/user%2Ftitle?key=value

The user could already access the raw URI value, but they had to calculate
pathEndIdx themselves, even though it might already be cached inside
QueryStringDecoder.

Result:

The user can easily and efficiently access the undecoded path and query.
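
A usage sketch of the new accessors; rawPath()/rawQuery() are the methods named in this commit, and the outputs shown in the comments are what the description above implies:

```
import io.netty.handler.codec.http.QueryStringDecoder;

// Sketch: distinguishing two URIs that decode to the same path.
public final class RawPathExample {
    public static void main(String[] args) {
        QueryStringDecoder a = new QueryStringDecoder("/user/title?key=value");
        QueryStringDecoder b = new QueryStringDecoder("/user%2Ftitle?key=value");

        // Decoded paths collide:
        System.out.println(a.path());     // /user/title
        System.out.println(b.path());     // /user/title (the %2F was decoded to '/')

        // The raw accessors keep the distinction (expected output, assuming they
        // return the undecoded path and query components):
        System.out.println(a.rawPath());  // /user/title
        System.out.println(b.rawPath());  // /user%2Ftitle
        System.out.println(a.rawQuery()); // key=value
    }
}
```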
2017-10-24 09:32:06 +02:00
Lionel Li
baf273aea8 Trigger user event when H2 conn preface & SETTINGS frame are sent
Motivation:
Previously the client Http2ConnectionHandler triggered a user event
immediately when the HTTP/2 connection preface was sent. Any attempt to
immediately send a new request could cause the server to terminate the
connection, as it might not have received the SETTINGS frame from the
client. Per RFC 7540 Section 3.5, the preface "MUST be followed by a
SETTINGS frame (Section 6.5), which MAY be empty."
(https://tools.ietf.org/html/rfc7540#section-3.5)

This event could be made more meaningful if it also indicates that the
initial client SETTINGS frame has been sent to signal that the channel
is ready to send new requests.

Modification:
- Renamed event to Http2ConnectionPrefaceAndSettingsFrameWrittenEvent.
- Modified Http2ConnectionHandler to trigger the user event only if it
  is a client and it has sent both the preface and SETTINGS frame.

Result:
It is now safe to use the event as an indicator that the HTTP/2
connection is ready to send new requests.
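
A minimal handler sketch reacting to the renamed event (assuming the event class is exposed as io.netty.handler.codec.http2.Http2ConnectionPrefaceAndSettingsFrameWrittenEvent):

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http2.Http2ConnectionPrefaceAndSettingsFrameWrittenEvent;

// Sketch: defer sending the first request until the event arrives, i.e. until
// both the connection preface and the initial SETTINGS frame have been written.
public class DeferFirstRequestHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof Http2ConnectionPrefaceAndSettingsFrameWrittenEvent) {
            // Safe point to start writing requests on this connection.
            writeFirstRequest(ctx);
        }
        super.userEventTriggered(ctx, evt);
    }

    private void writeFirstRequest(ChannelHandlerContext ctx) {
        // Application-specific; left as a placeholder in this sketch.
    }
}
```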
2017-10-24 09:17:06 +02:00
Norman Maurer
55b501d0d4 Correctly update Channel writability when queueing data in SslHandler.
Motivation:

A regression was introduced in 86e653e which had the effect that the writability was not updated for a Channel while queueing data in the SslHandler.

Modifications:

- Factor out code that will increment / decrement pending bytes and use it in AbstractCoalescingBufferQueue and PendingWriteQueue
- Add test-case

Result:

Channel writability changes are triggered again.
2017-10-24 09:13:15 +02:00
Norman Maurer
d8e187ff2c Ensure setting / getting the traffic class on an ipv4 only system works when using the native transport.
Motivation:

We tried to set IPv6 options on an IPv4-only system and so failed to set / get the traffic class options. This resulted in a test error when trying to compile Netty on IPv4-only systems.

Modifications:

Use the correct options depending on whether the system is IPv4-only or not.

Result:

Be able to build and use on ipv4 only systems.
2017-10-24 09:03:28 +02:00
Nikolay Fedorovskikh
dcc39e5b21 Fixes a LoggingHandler#format method with two arguments
Motivation:
Bug in the capacity calculation: an automatic conversion to String happens instead of summing the lengths.

Modifications:
Use `eventName.length()` in the sum.
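
A hypothetical illustration of the bug pattern (not the literal LoggingHandler source): inside a capacity expression, a stray String operand turns the whole computation into string concatenation.

```
// Hypothetical illustration of the bug pattern; not the literal Netty source.
public final class CapacityBugSketch {
    public static void main(String[] args) {
        String chStr = "[id: 0x12345678]";
        String eventName = "USER_EVENT";

        // Buggy: the String operand turns '+' into concatenation, so StringBuilder(String)
        // is selected and "17USER_EVENT" becomes the builder's initial content (trash in logs).
        StringBuilder buggy = new StringBuilder(chStr.length() + 1 + eventName);

        // Fixed: summing lengths keeps the expression an int, selecting StringBuilder(int).
        StringBuilder fixed = new StringBuilder(chStr.length() + 1 + eventName.length());

        System.out.println(buggy); // 17USER_EVENT
        System.out.println(fixed); // (empty, as intended)
    }
}
```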

Result:
Less trash in logs.
2017-10-24 06:56:54 +02:00
Norman Maurer
521e87984d SslHandler.setHandshakeTimeout*(...) should also been enforced on the server side.
Motivation:

We should also enforce the handshake timeout on the server-side to allow closing connections which will not finish the handshake in an expected amount of time.

Modifications:

- Enforce the timeout on the server and client side
- Add unit test.

Result:

Fixes [#7230].
2017-10-23 19:33:07 +02:00
Scott Mitchell
dcb828f02f DnsResolver CNAME redirect bug
Motivation:
DnsNameResolverContext#followCname attempts to build a query to follow a CNAME, but puts the original hostname in the DnsQuery instead of the CNAME hostname. This will result in not following CNAME redirects correctly.

Modifications:
- DnsNameResolverContext#followCname should use the CNAME instead of the original hostname when building the DnsQuery

Result:
More correct handling of redirect queries.
2017-10-23 09:37:32 -07:00
Nikolay Fedorovskikh
dc98eae5a5 Correct filling an origin header for WS client
Motivation:
An `origin`/`sec-websocket-origin` header value in the websocket client is filled incorrectly in some cases:
- The hostname is not converted to lower-case as prescribed by RFC 6454 (see [1]).
- An `http` scheme is selected when the source URI has a `wss`/`https` scheme and a non-standard port.

Modifications:
- Convert uri-host to lower-case.
- Use a `https` scheme if source URI scheme is `wss`/`https`, or if source scheme is null and port == 443.

Result:
The `origin` header is now filled correctly for the WS client.

[1] https://tools.ietf.org/html/rfc6454#section-4
2017-10-23 11:38:34 +02:00
Idel Pivnitskiy
d93c607f93 Update links for actual Protobuf repo and documentation
Motivation:

Use actual links to the new locations of the Protobuf repo and documentation to
avoid problems if the redirects stop working.

Modification:

Links in comments and all/pom.xml

Result:

Correct links to Protobuf resources
2017-10-23 07:41:35 +02:00
Norman Maurer
740c68faed Add @SuppressWarnings to clean up 16b1dbdf92.
Motivation:

We should add @SuppressWarnings annotations.

Modifications:

Add annotations.

Result:

Fewer warnings
2017-10-22 18:16:46 +02:00
Idel Pivnitskiy
4793daa589 Make Comparators Serializable
Motivation:

Objects of java.util.TreeMap or java.util.TreeSet will become
non-Serializable if instantiated with Comparators which are not also
Serializable. This can result in unexpected and difficult-to-diagnose
bugs.

Modifications:

Implement Serializable in all classes that implement Comparator.

Result:

Proper Comparators which will not force collections into a
non-Serializable state.
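
A generic example of the pattern (a hypothetical comparator, not one of Netty's):

```
import java.io.Serializable;
import java.util.Comparator;
import java.util.TreeSet;

// Hypothetical example: a Comparator that is also Serializable, so collections
// built on it stay serializable too.
public final class CaseInsensitiveComparator implements Comparator<String>, Serializable {

    private static final long serialVersionUID = 1L;

    @Override
    public int compare(String a, String b) {
        return a.compareToIgnoreCase(b);
    }

    public static void main(String[] args) {
        // The TreeSet is only Serializable because its comparator is.
        TreeSet<String> set = new TreeSet<String>(new CaseInsensitiveComparator());
        set.add("Netty");
        set.add("netty"); // ignored: equal under the comparator
        System.out.println(set); // [Netty]
    }
}
```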
2017-10-22 03:40:28 +02:00
Idel Pivnitskiy
50a067a8f7 Make methods 'static' where it possible
Motivation:

Even if it's a super micro-optimization (most JVMs could optimize such
cases at runtime), in theory (and according to some perf tests) it
may help a bit. It also makes the code clearer and allows you to
access such methods in the test scope directly, without an instance of
the class.

Modifications:

Add the 'static' modifier for all methods where possible. Mostly in
the test scope.

Result:

Cleaner code with proper 'static' modifiers.
2017-10-21 14:59:26 +02:00
Nikolay Fedorovskikh
17e1a26d64 Fixes a javadoc for ByteBufUtil#copy method
Motivation:
Javadoc of the `ByteBufUtil#copy(AsciiString, int, ByteBuf, int, int)` is incorrect.

Modifications:
Fix it.

Result:
The description of the `#copy` method is not misleading.
2017-10-21 14:51:20 +02:00
Nikolay Fedorovskikh
09dd6a5d4d Minor improvements in ByteBufOutputStream
Motivation:
In the `ByteBufOutputStream` we can use the appropriate methods of `ByteBuf`
to reduce virtual method calls and avoid duplicating conversion logic.

Modifications:
- Use the appropriate methods of `ByteBuf`.
- Remove redundant conversions (int -> byte, int -> char).
- Use `ByteBuf#writeCharSequence` in `writeBytes(String)`.

Result:
Less code duplication. The `writeBytes(String)` method is faster.
No unnecessary conversions. More consistent and cleaner code.
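
A small sketch of the `writeCharSequence` point; buffer management is simplified for the example:

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

// Sketch: ByteBuf#writeCharSequence writes the string in one call and returns
// the number of bytes written, avoiding a manual per-char conversion loop.
public final class WriteCharSequenceExample {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer();
        try {
            int written = buf.writeCharSequence("héllo", CharsetUtil.UTF_8);
            System.out.println(written);              // 6 (the 'é' takes two bytes)
            System.out.println(buf.readableBytes());  // 6
        } finally {
            buf.release();
        }
    }
}
```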
2017-10-21 14:44:13 +02:00
Idel Pivnitskiy
558097449c Add missed 'serialVersionUID' field for Serializable classes
Motivation:

Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.

Modifications:

Add missed 'serialVersionUID' field for all Serializable
classes.

Result:

Proper deserialization of previously serialized objects.
2017-10-21 14:41:18 +02:00
Cory Benfield
1b0a545921 Do not send Content-Length: 0 on 101 responses.
Motivation:

During code read of the Netty codebase I noticed that the Netty
HttpServerUpgradeHandler unconditionally sets a Content-Length: 0
header on 101 Switching Protocols responses. This explicitly
contravenes RFC 7230 Section 3.3.2 (Content-Length), which notes
that:

    A server MUST NOT send a Content-Length header field in any
    response with a status code of 1xx (Informational) or 204
    (No Content).

While it is unlikely that any client will ever be confused by
this behaviour, there is no reason to contravene this part of the
specification.

Modifications:

Removed the line of code setting the header field and changed the
only test that expected it to be there.

Result:

When performing the server portion of HTTP upgrade, the 101
Switching Protocols response will no longer contain a
Content-Length: 0 header field.
2017-10-21 14:36:19 +02:00
Carl Mastrangelo
16b1dbdf92 Motivation: Resource Leak Detector (RLD) tries to helpfully indicate where an object was last accessed and reports the accesses in case the object was not cleaned up. It handles lightly used objects well, but drops all but the last few accesses.
Configuring this is tough because there is a split between highly shared (and accessed) objects and lightly accessed objects.

Modification:
There are a number of changes here.  In relative order of importance:

API / Functionality changes:
* Max records and max sample records are gone.  Only "target" records, the number of records the detector tries to retain, is exposed.
* Records are sampled based on the number of already stored records.  The likelihood of recording a new sample is `2^(-n)`, where `n` is the number of currently stored elements.
* Records are stored in a concurrent stack structure rather than a list.  This avoids a head and a tail: since the stack is only read once, there is no need to maintain separate head and tail pointers.
* The properties of this imply that the very first and very last access are always recorded.  When deciding to sample, the top element is replaced rather than pushed.
* Samples that happen between the first and last accesses now have a chance of being recorded.  Previously only the final few were kept.
* Sampling is no longer deterministic.  Previously, a deterministic access pattern meant that you could conceivably always miss some access points.
* Sampling has a linear ramp for low values and exponentially backs off roughly equal to 2^n.  This means that for 1,000,000 accesses, about 20 will actually be kept.  I have an elegant proof for this which is too large to fit in this commit message.

Code changes:
* All locks are gone.  Because sampling rarely needs to do a write, there is almost 0 contention.  The dropped records counter is slightly contentious, but this could be removed or changed to a LongAdder.  This was not done because of memory concerns.
* Stack trace exclusion is done outside of RLD.  Classes can opt to remove some of their methods.
* Stack trace exclusion is faster, since it uses String.equals, often getting a pointer compare due to interning.  Previously it used contains()
* Leak printing is outputted fairly differently.  I tried to preserve as much of the original formatting as possible, but some things didn't make sense to keep.

Result:
More useful leak reporting.

Faster:
```
Before:
Benchmark                                           (recordTimes)   Mode  Cnt       Score      Error  Units
ResourceLeakDetectorRecordBenchmark.record                      8  thrpt   20  136293.404 ± 7669.454  ops/s
ResourceLeakDetectorRecordBenchmark.record                     16  thrpt   20   72805.720 ± 3710.864  ops/s
ResourceLeakDetectorRecordBenchmark.recordWithHint              8  thrpt   20  139131.215 ± 4882.751  ops/s
ResourceLeakDetectorRecordBenchmark.recordWithHint             16  thrpt   20   74146.313 ± 4999.246  ops/s

After:
Benchmark                                           (recordTimes)   Mode  Cnt       Score      Error  Units
ResourceLeakDetectorRecordBenchmark.record                      8  thrpt   20  155281.969 ± 5301.399  ops/s
ResourceLeakDetectorRecordBenchmark.record                     16  thrpt   20   77866.239 ± 3821.054  ops/s
ResourceLeakDetectorRecordBenchmark.recordWithHint              8  thrpt   20  153360.036 ± 8611.353  ops/s
ResourceLeakDetectorRecordBenchmark.recordWithHint             16  thrpt   20   78670.804 ± 2399.149  ops/s
```
2017-10-19 12:21:21 -07:00
Norman Maurer
9bcf31977c Fail the connectPromise with the correct exception if the connection is refused when using the native kqueue transport.
Motivation:

Due to a bug we would sometimes fail the connectPromise with a ClosedChannelException when using the kqueue transport and the remote peer refuses the connection. We need to ensure we fail it with the correct exception.

Modifications:

Call finishConnect() before calling close() to ensure we preserve the correct exception.

Result:

KQueueSocketConnectionAttemptTest.testConnectionRefused will pass always on macOS.
2017-10-07 21:33:26 +02:00