Motivation:
KQueue implementations currently have inconsistent behavior with Epoll implementations with respect to asynchronous sockets and connecting. In the Epoll transport we attempt to connect, and if the connect call does not synchronously fail/succeed we set the EPOLLOUT flag, which will be triggered by the kernel if the connection attempt succeeds or an error occurs. The connect API provides no way to asynchronously communicate an error, so the Epoll implementation fires an EPOLLOUT event and puts the connect status in getsockopt(SO_ERROR). KQueue provides the same APIs but different behavior. If EVFILT_WRITE is not enabled and EVFILT_READ is enabled before connect is called, and there is an error, the kernel may fire the EVFILT_READ filter and provide the Connection Refused error via read(). This is true even if we set the EVFILT_WRITE filter after calling connect, because connect didn't synchronously complete. After the error has been delivered via read(), a call to getsockopt(SO_ERROR) will return 0, indicating there is no error. This means we cannot rely upon the KQueue based kernel to deliver connection errors via the EVFILT_WRITE filter in the same way that the Linux kernel does with the EPOLLOUT flag.
ce241bd introduced a change which depends upon the behavior of EVFILT_WRITE being set and may prematurely stop writing to the OS as a result, because we assume the OS will notify us when the socket is writable. However, the current workaround for the behavior described above is to initialize EVFILT_WRITE to true for connection oriented protocols. This leads to prematurely exiting from flush(), which may lead to deadlock.
Modifications:
- KQueue should check, when an error is obtained from read(), whether the connectPromise has been completed, and if not, complete it with a ConnectException
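A minimal sketch of the idea, using illustrative names (a connectPromise field and a read-error hook) rather than the actual KQueue transport internals:
```java
import java.net.ConnectException;

import io.netty.channel.ChannelPromise;

// Illustrative only: when the read path surfaces an error, fail a still-pending
// connect promise instead of waiting for EVFILT_WRITE, which may never report it.
final class ConnectErrorSketch {
    private ChannelPromise connectPromise; // set when connect() did not complete synchronously

    void handleReadError(Throwable cause) {
        ChannelPromise promise = connectPromise;
        if (promise != null && !promise.isDone()) {
            // The kernel delivered the failure via read(); getsockopt(SO_ERROR)
            // would now return 0, so this is the only place to report it.
            promise.tryFailure(new ConnectException("connect failed: " + cause.getMessage()));
            connectPromise = null;
        }
    }
}
```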
Result:
No more deadlock in KQueue due to asynchronous connect workaround.
Motivation:
When we do DNS queries we need to ensure we always release the AddressEnvelope.
Modifications:
Also release the AddressEnvelope if the original resolution was done in the meantime and we did not cancel the extra query yet.
Result:
Should fix [#7713]
Motivation:
The Snappy decoder was failing on valid inputs containing literals
with 2-byte lengths > 0x8000 or copies with 2-byte offsets >= 0x8000.
The decoder was also enforcing an artificially low offset limit of
0x7FFF, something the Snappy format description advises against,
and which prevents decoding valid inputs generated by other encoders.
Modifications:
Interpret 2-byte literal lengths and 2-byte copy offsets as unsigned
shorts, in accordance with the format description and reference
implementation.
Allow any positive offset value. Throw an appropriate exception
for negative values (which can theoretically occur due to arithmetic
overflow on 4-byte offsets, but are unlikely to occur in the wild).
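For illustration, a hedged sketch of reading the 2-byte values without sign extension via ByteBuf (helper names are made up for this example; the Snappy format stores these values little-endian):
```java
import io.netty.buffer.ByteBuf;

// Illustrative only: 2-byte Snappy lengths and copy offsets are unsigned, so
// values >= 0x8000 must not be sign-extended into negative ints.
final class SnappyUnsignedReads {
    static int readTwoByteLiteralLength(ByteBuf in) {
        return in.readUnsignedShortLE(); // 0..65535, never negative
    }

    static int readTwoByteCopyOffset(ByteBuf in) {
        return in.readUnsignedShortLE(); // offsets >= 0x8000 stay positive
    }
}
```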
Result:
The Snappy decoder can handle valid inputs that previously caused
it to throw exceptions.
Motivation:
b215794de3 recently introduced a change in behavior where writeSpinCount provided a limit for how many write operations were attempted per flush operation. However, when the write quantum was met, the selector write flag was not cleared, and the channel unsafe flush0 method has an optimization which prematurely exits if the write flag is set. This may lead to no write progress being made under the following scenario:
- flush is called, but the socket can't accept all data, we set the write flag
- the selector wakes us up because the socket is writable, we write data and use the writeSpinCount quantum
- we then schedule a flush() on the EventLoop to execute later; however, the flush0 optimization prematurely exits because the write flag is still set
In this scenario the socket is still writable so the EventLoop may never notify us that the socket is writable, and therefore we may never attempt to flush data to the OS.
Modifications:
- When the writeSpinCount quantum is exceeded we should clear the selector write flag
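A sketch of the intended flow, using illustrative names rather than the transport's real internals; the key point is clearing OP_WRITE before handing the remaining flush back to the event loop:
```java
import java.nio.channels.SelectionKey;

import io.netty.channel.EventLoop;

// Illustrative only: what to do when a flush could not write everything.
final class IncompleteWriteSketch {
    void incompleteWrite(boolean socketFull, SelectionKey key, Runnable flushTask, EventLoop eventLoop) {
        if (socketFull) {
            // Socket buffer is full: let the selector wake us up once it is writable again.
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            // writeSpinCount quantum exceeded but the socket is still writable:
            // clear OP_WRITE, otherwise the scheduled flush0() would short-circuit
            // on the set flag and no further progress would be made.
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
            eventLoop.execute(flushTask);
        }
    }
}
```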
Result:
Fixes https://github.com/netty/netty/issues/7729
Motivation:
CharSequenceValueConverter#convertToBoolean has a few manual conditionals which can be removed if we use AsciiString.contentEqualsIgnoreCase. Also by comparing an AsciiString to a String we will incur conversions to char that can be avoided if we compare against AsciiString.
Modifications:
- Use AsciiString.contentEqualsIgnoreCase
- Compare against an AsciiString
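A minimal sketch of the resulting comparison, assuming a cached AsciiString constant (the names here are illustrative):
```java
import io.netty.util.AsciiString;

// Illustrative only: comparing against an AsciiString constant avoids per-character
// char conversion, and contentEqualsIgnoreCase replaces the manual conditionals.
final class BooleanConvertSketch {
    private static final AsciiString TRUE_ASCII = AsciiString.cached("true");

    static boolean convertToBoolean(CharSequence value) {
        return AsciiString.contentEqualsIgnoreCase(TRUE_ASCII, value);
    }
}
```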
Result:
Simplified CharSequenceValueConverter#convertToBoolean which favors AsciiString comparison.
Motivation:
It is not clear why Unsafe is unavailable when it is explicitly
disabled, or when Netty thinks it is running on Android.
Modification:
Change the "has" fields and methods to be causes. A null cause
means Unsafe is present. This captures all possible reasons why
Unsafe might not be available.
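A sketch of the pattern, with illustrative names and only one of the checks shown; the real detection covers more cases (Android, reflection failures, etc.):
```java
// Illustrative only: keep the Throwable that explains why Unsafe is unavailable
// instead of a plain boolean; a null cause means Unsafe can be used.
final class UnsafeAvailabilitySketch {
    private static final Throwable UNSAFE_UNAVAILABILITY_CAUSE = detect();

    static boolean hasUnsafe() {
        return UNSAFE_UNAVAILABILITY_CAUSE == null;
    }

    static Throwable unsafeUnavailabilityCause() {
        return UNSAFE_UNAVAILABILITY_CAUSE; // log or rethrow this to see *why*
    }

    private static Throwable detect() {
        if (Boolean.getBoolean("io.netty.noUnsafe")) {
            return new UnsupportedOperationException("sun.misc.Unsafe: unavailable (io.netty.noUnsafe)");
        }
        // ... further checks would each return their own descriptive cause here ...
        return null;
    }
}
```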
Result:
Easier to debug Netty startup when logging cannot be turned on.
Motivation:
When SSL handshake fails, the connection should be closed. This is not true anymore after 978a46c.
Modifications:
- Ensure we always flush and close the channel on handshake failure.
- Add testcase.
Result:
Fixes [#7724].
Motivation:
To avoid eager allocation of the destination buffer and to perform length-prefixed encoding of UTF-8 strings with a forward-only access pattern.
Modifications:
The original writeUtf8 is modified to allow customization of the bytes reserved on the destination buffer, and an exact UTF-8 length estimator is introduced.
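A hedged sketch of the resulting usage pattern, using ByteBufUtil.utf8Bytes as the exact estimator; the length prefix is written first and earlier indexes are never revisited:
```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufUtil;
import io.netty.buffer.Unpooled;

// Illustrative only: length-first UTF-8 encoding with forward-only writes.
final class LengthPrefixedUtf8 {
    static ByteBuf encode(CharSequence text) {
        int utf8Length = ByteBufUtil.utf8Bytes(text); // exact byte count, no temporary buffers
        ByteBuf out = Unpooled.buffer(4 + utf8Length);
        out.writeInt(utf8Length);                     // length prefix first
        ByteBufUtil.writeUtf8(out, text);             // then the payload
        return out;
    }
}
```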
Result:
It is now possible to perform length-first encoding of well-formed UTF-8 char sequences following a forward-only write access pattern on the destination buffer.
Motivation:
We need to update jetty-alpn-agent to support java 1.8.0_162 while running our tests / examples.
Modifications:
Update jetty-alpn-agent to 2.0.7
Result:
All ALPN related tests work again on the latest Java 8 version.
Motivation:
Profiling tcnative SSL code showed a non-trivial percentage (1%)
of time spent in JNI code for InstanceOf. This turned out to be
from `Buffer.address`, which makes a JNI call that safely checks
on each call that the ByteBuffer is direct.
Modification:
Prefer using the address field of the pojo rather than looking it
up with JNI. This is the same approach taken by the `OpenSsl`
class.
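A simplified illustration of the approach (not the actual tcnative code, and with minimal error handling): resolve the offset of java.nio.Buffer's address field once, then read it directly instead of making a JNI call per invocation:
```java
import java.lang.reflect.Field;
import java.nio.Buffer;
import java.nio.ByteBuffer;

// Illustrative only: read the direct buffer address from the "address" field.
final class DirectAddressSketch {
    private static final sun.misc.Unsafe UNSAFE = loadUnsafe();
    private static final long ADDRESS_OFFSET = addressOffset();

    static long address(ByteBuffer directBuffer) {
        // Caller must ensure the buffer is direct; no per-call JNI check here.
        return UNSAFE.getLong(directBuffer, ADDRESS_OFFSET);
    }

    private static long addressOffset() {
        try {
            Field address = Buffer.class.getDeclaredField("address");
            return UNSAFE.objectFieldOffset(address);
        } catch (NoSuchFieldException e) {
            throw new IllegalStateException(e);
        }
    }

    private static sun.misc.Unsafe loadUnsafe() {
        try {
            Field theUnsafe = sun.misc.Unsafe.class.getDeclaredField("theUnsafe");
            theUnsafe.setAccessible(true);
            return (sun.misc.Unsafe) theUnsafe.get(null);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(e);
        }
    }
}
```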
Result:
Less JNI overhead
Motivation:
If you pass the output of CharSequenceValueConverter.convertToTimeMillis to convertTimeMillis it will throw a ParseException.
Modifications:
- Correctly implement CharSequenceValueConverter.convertTimeMillis
- Add unit-tests for CharSequenceValueConverter
Result:
Correctly convert timemillis.
Motivation:
HeaderEntry.equals() inherits Object.equals(), which simply checks if two objects are the same instance.
So it returns false even when two HeaderEntry objects have the same name and value.
Modifications:
Implement HeaderEntry.equals() that follows the specification of Map.Entry.equals().
https://docs.oracle.com/javase/9/docs/api/java/util/Map.Entry.html#equals-java.lang.Object-
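A simplified sketch of an equals() following the Map.Entry contract (the real HeaderEntry is generic over the header name/value types and pairs this with the matching hashCode()):
```java
import java.util.Map;
import java.util.Objects;

// Illustrative only: the Map.Entry.equals()/hashCode() contract.
final class EntrySketch<K, V> implements Map.Entry<K, V> {
    private final K key;
    private V value;

    EntrySketch(K key, V value) {
        this.key = key;
        this.value = value;
    }

    @Override public K getKey() { return key; }
    @Override public V getValue() { return value; }
    @Override public V setValue(V newValue) { V old = value; value = newValue; return old; }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Map.Entry)) {
            return false;
        }
        Map.Entry<?, ?> other = (Map.Entry<?, ?>) o;
        // Equal iff both key and value are equal (null-safe), per Map.Entry.equals().
        return Objects.equals(key, other.getKey()) && Objects.equals(value, other.getValue());
    }

    @Override
    public int hashCode() {
        // Per Map.Entry.hashCode().
        return (key == null ? 0 : key.hashCode()) ^ (value == null ? 0 : value.hashCode());
    }
}
```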
Result:
HeaderEntry.equals() returns true if two HeaderEntry objects have the same name and value.
Motivation:
HttpHeaders.getBoolean should return the same truth value for the same string value, regardless of the underlying type.
Modifications:
- Only treat values of "true" as Boolean.TRUE
- Add unit tests.
Result:
Consistent conversion of values for all CharSequence implementations.
Motivation:
Allow the observation of SETTINGS frames by other handlers in the pipeline. For my particular use case this allows me to observe the value of MAX_CONCURRENT_STREAMS for a ChannelPool abstraction that supports HTTP/2 multiplexing. Besides this, also forward GOAWAY frames.
Modification:
Always forward SETTINGS and GOAWAY frames
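A hedged usage example: with SETTINGS frames forwarded, a handler added to the parent channel's pipeline after the Http2MultiplexCodec can pick up the advertised MAX_CONCURRENT_STREAMS (the handler below is illustrative, not part of Netty):
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http2.Http2SettingsFrame;

// Illustrative only: observe the remote peer's SETTINGS on the parent channel.
final class SettingsObserver extends SimpleChannelInboundHandler<Http2SettingsFrame> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, Http2SettingsFrame frame) {
        Long maxStreams = frame.settings().maxConcurrentStreams();
        if (maxStreams != null) {
            // e.g. size a ChannelPool's per-connection stream limit here
            System.out.println("MAX_CONCURRENT_STREAMS = " + maxStreams);
        }
    }
}
```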
Result:
Settings / Goaway can now be observed in the parent channel. Previously it was not possible (to my knowledge) to capture the settings when using Http2MultiplexCodec.
Motivation:
Headers.get* methods should not throw an exception but return null or the default value if conversion of the value fails.
Modifications:
- Correctly handle the case when ValueConverter throws an Exception.
- Add testcase.
Result:
Fixes [#7710].
Motivation:
DefaultPromise's internal state depends upon specific Signal objects. These Signal objects can be used externally, which causes the DefaultPromise API to not function correctly and its state to become corrupted.
Modifications:
- DefaultPromise shouldn't depend upon Signal for its internal state
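A sketch of the pattern only, with illustrative names and without the real implementation's CAS-based thread safety: the internal markers are private objects that user code can never pass in as a result value, unlike the publicly reachable Signal constants:
```java
// Illustrative only, not thread-safe: private sentinels instead of shared Signals.
final class PromiseStateSketch<V> {
    private static final Object SUCCESS = new Object();       // marks setSuccess(null)
    private static final Object UNCANCELLABLE = new Object(); // marks setUncancellable()

    private volatile Object result;

    boolean trySuccess(V value) {
        Object encoded = value == null ? SUCCESS : value;
        if (result == null || result == UNCANCELLABLE) {
            result = encoded; // the real code uses a CAS here
            return true;
        }
        return false;
    }

    @SuppressWarnings("unchecked")
    V getNow() {
        Object r = result;
        return r == SUCCESS || r == UNCANCELLABLE ? null : (V) r;
    }
}
```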
Result:
Fixes https://github.com/netty/netty/issues/7707
Motivation:
ByteBufUtil by default will cache DirectByteBuffer objects, and the
associated direct memory (up to 64k). In combination with the Recycler,
which may cache up to 32k elements per thread, this may lead to a large
amount of direct memory being retained per EventLoop thread. As traffic
spikes come, this may be perceived as a memory leak because the memory
in the Recycler will never be reclaimed.
Modifications:
- By default we shouldn't cache DirectByteBuffer objects.
Result:
Less direct memory consumption because DirectByteBuffer objects are no longer cached by default.
Motivation:
NioDatagramChannel attempts to unpack an AddressedEnvelope and unconditionally uses internalNioBuffer. However, if the ByteBuf is a CompositeByteBuf with more than one component, the write will fail and throw an exception.
Modifications:
- NioDatagramChannel should check the nioBufferCount before attempting
to use internalNioBuffer
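A sketch of the guard, simplified from the actual datagram write path:
```java
import java.nio.ByteBuffer;

import io.netty.buffer.ByteBuf;

// Illustrative only: internalNioBuffer() is only valid when the ByteBuf is backed
// by a single NIO buffer; a CompositeByteBuf with several components needs nioBuffer().
final class NioBufferSketch {
    static ByteBuffer toNioBuffer(ByteBuf data) {
        int readable = data.readableBytes();
        return data.nioBufferCount() == 1
                ? data.internalNioBuffer(data.readerIndex(), readable)
                : data.nioBuffer(data.readerIndex(), readable);
    }
}
```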
Result:
No more failure to write UDP packets on NIO when a CompositeByteBuf is
used.
Motivation:
The code did reflection on every method call, which made it slower and
harder to read, with additional cases to consider.
Modifications:
Instead of loading the method and then throwing it away, save the Method
reference instead of the Class reference. Then also use more precise
exception handling for the method invocation.
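A generic illustration of the change, with placeholder class and method names:
```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Illustrative only: resolve the Method once, keep the reference, and handle
// only the exceptions the invocation itself can throw.
final class CachedMethodSketch {
    private static final Method TARGET = lookup();

    private static Method lookup() {
        try {
            return Class.forName("com.example.SomeClass")      // placeholder
                        .getMethod("someMethod", String.class); // placeholder
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return null; // feature unavailable on this platform
        }
    }

    static Object invoke(Object instance, String arg) throws Exception {
        if (TARGET == null) {
            throw new UnsupportedOperationException("method not available");
        }
        try {
            return TARGET.invoke(instance, arg);
        } catch (InvocationTargetException e) {
            Throwable cause = e.getCause(); // the real failure thrown by the target
            if (cause instanceof Exception) {
                throw (Exception) cause;
            }
            throw new IllegalStateException(cause);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException(e);
        }
    }
}
```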
Result:
Simpler, speedier code.
Motivation:
The Recycler currently retains 32k objects per thread by default. The Recycler is used in more than just one place and may result in large amounts of memory bloat if spikes of traffic are observed.
Modifications:
- Reduce the Recycler's default capacity from 32k to 4k.
Result:
- Lower default capacity of the Recycler and less memory retained.
Motivation:
Conscrypt is now 1.0. No more need to depend on release candidates.
Modifications:
Just the version bump. Things seemed compatible.
Result:
Depending on first guaranteed-api-stable release of Conscrypt.
Motivation:
Some java binaries include android classes on their classpath, even
if they aren't actually android. When this is true, `Unsafe` no
longer works, disabling the Epoll functionality. A sample case is
for binaries that use the j2objc library.
Modifications:
Check the `java.vm.name` instead of the classpath. Numerous
Google-internal Android libraries / binaries check this property
rather than the class path.
It is believed this is safe and works with both ART and Dalvik
VMs, and is safe for Robolectric and j2objc.
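A sketch of the check; treating "Dalvik" as the java.vm.name reported by Android runtimes (ART is believed to report the same value for compatibility) is an assumption of this example:
```java
// Illustrative only: detect Android via the VM name rather than the classpath.
final class AndroidDetectionSketch {
    static boolean isAndroid() {
        String vmName = System.getProperty("java.vm.name", "");
        return "Dalvik".equals(vmName);
    }
}
```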
Results:
Unusually built java server binaries can still use Netty Epoll.
Motivation:
At the moment we use a ByteBuf as the payload for an http2 PING frame. This complicates life-time management a lot with no real gain and may also produce more objects than needed. We should just use a long, as the payload is required to be 8 bytes anyway.
Modifications:
Use long for ping payloads.
Result:
Fixes [#7629].
Motivation:
Sometimes, it would be convenient to be able to easily enable all
supported cipher suites, regardless of security.
Currently, the only way is to retrieve all supported ciphers and pass
them explicitly.
them explicitly.
Modification:
Introduce a new IdentityCipherSuiteFilter singleton that defaults to
supportedCiphers instead of defaultCiphers when ciphers are null.
Result:
Convenient way to enable all supported cipher suites.
Motivation:
Currently if a user calls set/remove/set/remove many times, it will create multiple cleaner tasks for the same index. This may cause an OOM because a long-lived thread will accumulate more and more tasks in LIVE_SET.
Modification:
Add flag to avoid recreating tasks.
Result:
Only one cleaner task is created, at the cost of using more space in indexedVariables.
Motivation:
Since MQTT protocol version 3.1.1, the SUBACK message can indicate failure in its payload.
Modification:
Do not erase the failure payload for the SUBACK message.
Result:
Fixes #7665
Motivation:
When following a CNAME response DnsNameResolverContext may issue an A and AAAA query. However, DnsNameResolverContext would have already issued an A and AAAA query to get the CNAME response, and this may result in 2 additional A/AAAA queries per CNAME response.
Modifications:
- DnsNameResolverContext#followCname shouldn't issue 2 queries, but instead just a single query with the same record type as the original query
Result:
No more duplicate queries as a result of CNAME responses.
Motivation:
There is some cleanup that can be done.
Modifications:
- Use initializer list expressions where possible
- Remove unused imports.
Result:
Cleaner code.
Motivation:
We need the memoryAddress of a direct buffer when using our native transports. For this reason ReadOnlyUnsafeDirectByteBuf.memoryAddress() should not throw.
Modifications:
- Correctly override ReadOnlyUnsafeDirectByteBuf.memoryAddress() and hasMemoryAddress()
- Add test case
Result:
Fixes [#7672].
Motivation:
In google/conscrypt#313 the Conscrypt.Engines class was removed in favor
of methods directly on Conscrypt and overloading. The Conscrypt-using
code in Netty used reflection to access the old API, which no longer
exists, and thus recent versions of Conscrypt fail to enable things
like ALPN with Netty.
Modifications:
Instead of calling Conscrypt.Engines.isConscrypt, call
Conscrypt.isConscrypt.
Result:
Conscrypt detected properly at runtime.
Motivation:
DnsServerAddressStream provides an iterator-like interface but may be expected to start at a specific point upon each new usage. If a DnsServerAddressStream is re-used in multiple independent iterations the order of iteration may be incorrect. DnsNameResolverContext has a fallback DnsServerAddressStream reference if the cache doesn't contain a hit, but it is shared across multiple independent iterations. This may lead to undesirable DNS query order.
Modifications:
- DnsNameResolverContext#getNameServers should duplicate the default DnsServerAddressStream
Result:
Consistent iteration over the default DnsServerAddressStream in DnsNameResolverContext.
Motivation:
JdkSslContext builds the list of supported cipher suites, but assumes that ciphers prefixed with SSL_ and TLS_ will be interchangeable. However this is not the case and only applies to a small subset of ciphers. This results in the JdkSslContext attempting to use unsupported ciphers.
Modifications:
- When building the list of ciphers in JdkSslContext we should first check if the engine supports the TLS_ prefix cipher.
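A simplified sketch of the selection logic (not the actual JdkSslContext code):
```java
import java.util.Set;

// Illustrative only: prefer the TLS_-prefixed name and only fall back to the
// legacy SSL_ prefix when the engine does not list the TLS_ name as supported.
final class CipherNameSketch {
    static String resolve(String tlsName, Set<String> engineSupported) {
        if (engineSupported.contains(tlsName)) {
            return tlsName; // e.g. TLS_RSA_WITH_AES_128_CBC_SHA
        }
        String sslName = "SSL_" + tlsName.substring("TLS_".length());
        return engineSupported.contains(sslName) ? sslName : null; // null = skip, unsupported
    }
}
```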
Result:
Fixes https://github.com/netty/netty/issues/7673
Motivation:
SslHandler#decode methods catch any exceptions and attempt to wrap
before shutting down the engine. The intention is to write any alerts
which the engine may have pending. However the wrap process may also
attempt to write user data, and may also complete the associated
promises. If this is the case, and a promise listener closes the channel
then SslHandler may later propagate a SslHandshakeCompletionEvent user
event through the pipeline. Since the channel has already been closed
the user may no longer be paying attention to user events.
Modifications:
- SslHandler#decode should first fail the associated handshake promise
and propagate the SslHandshakeCompletionEvent before attempting to wrap
Result:
Fixes https://github.com/netty/netty/issues/7639
Motivation:
When following a CNAME it is possible there are multiple name servers to query against. However DnsNameResolverContext#followCname explicitly only uses the first name server address when attempting the query. This may lead to resolution failures because we didn't try all the available name servers.
Modifications:
DnsNameResolverContext#followCname should not just try the first name server, but it should try all name servers
Result:
More complete CNAME resolution.
Motivation:
We saw some timeouts on the CI when the leak detection is enabled.
Modifications:
- Use smaller number of operations in test
- Increase timeout
Result:
CI no longer times out.
Motivation:
At the moment DefaultDnsCache will expire each record depending on its own TTL. This may result in unexpected results for the end-user, especially if the user for example uses IPV4_PREFERRED but the cached AAAA records have a higher TTL than the A records, and so the A records were removed. In this case we would only return the AAAA records and not even try to refresh.
Modifications:
Always expire all records for a hostname when one TTL is reached.
Result:
Fixes [#7329]
Motivation:
ByteBufUtil.isText(...) may produce unexpected results if called concurrently on the same ByteBuf.
Modifications:
- Don't use internalNioBuffer where it is not safe.
- Add unit test.
Result:
ByteBufUtil.isText is thread-safe.
Motivation:
Reflective setAccessible(true) will produce scary warnings on the console when using Java 9+, even though Netty still works. That said, users may feel uncomfortable with these warnings, so we should not try to do it by default when using Java 9+.
Modifications:
Add an io.netty.tryReflectionSetAccessible system property which controls if setAccessible(...) will be used. By default it will be set to false when using Java 9+.
Result:
Fixes [#7254].
Motivation:
According to the section 4d of ASLv2:
If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
Modifications:
- Add a subset of NOTICE.txt of Apache Harmony.
Result:
- Fixes #7588
Motivation:
According to the spec:
All pseudo-header fields MUST appear in the header block before regular
header fields. Any request or response that contains a pseudo-header
field that appears in a header block after
a regular header field MUST be treated as malformed (Section 8.1.2.6).
Pseudo-header fields are only valid in the context in which they are defined.
Pseudo-header fields defined for requests MUST NOT appear in responses;
pseudo-header fields defined for responses MUST NOT appear in requests.
Pseudo-header fields MUST NOT appear in trailers.
Endpoints MUST treat a request or response that contains undefined or
invalid pseudo-header fields as malformed (Section 8.1.2.6).
Clients MUST NOT accept a malformed response. Note that these requirements
are intended to protect against several types of common attacks against HTTP;
they are deliberately strict because being permissive can expose
implementations to these vulnerabilities.
Modifications:
- Introduce validation in HPackDecoder
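A simplified sketch of the validation (illustrative names; the real decoder integrates this with the HPACK decoding state and Netty's HTTP/2 error handling rather than throwing IllegalArgumentException):
```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Illustrative only: track whether a regular header was seen and reject
// pseudo-headers that are unknown, of the wrong type, or out of order.
final class PseudoHeaderValidationSketch {
    private static final Set<String> REQUEST_PSEUDO_HEADERS =
            new HashSet<>(Arrays.asList(":method", ":scheme", ":authority", ":path"));
    private static final Set<String> RESPONSE_PSEUDO_HEADERS = Collections.singleton(":status");

    private boolean regularHeaderSeen;

    void validate(CharSequence name, boolean request) {
        String n = name.toString();
        if (!n.isEmpty() && n.charAt(0) == ':') {
            if (regularHeaderSeen) {
                throw new IllegalArgumentException("pseudo-header after regular header: " + n);
            }
            Set<String> allowed = request ? REQUEST_PSEUDO_HEADERS : RESPONSE_PSEUDO_HEADERS;
            if (!allowed.contains(n)) {
                throw new IllegalArgumentException("unknown or misplaced pseudo-header: " + n);
            }
        } else {
            regularHeaderSeen = true;
        }
    }
}
```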
Result:
- Requests with unknown pseudo-field headers are rejected
- Requests containing response-specific pseudo-headers are rejected
- Requests where a pseudo-header appears after a regular header are rejected
- h2spec 8.1.2.1 pass
Motivation:
We should convert Http2Exceptions that are produced because of STREAM_CLOSED to ClosedChannelException when handed over to the child channel, to make it more consistent with other transports.
Modifications:
- Check if STREAM_CLOSED is used and if so create a new ClosedChannelException (while preserving the original exception as the cause) and use it in the child channel
- Ensure STREAM_CLOSED is used in DefaultHttp2RemoteFlowController when writes are failed because of a closed stream.
- Add testcase
Result:
More consistent and correct exception usage.
Motivation:
The methods implement io.netty.util.concurrent.Future#cancel(boolean mayInterruptIfRunning), which actually ignores the param mayInterruptIfRunning. We need to add comments for the `mayInterruptIfRunning` param.
Modifications:
Add comments for the `mayInterruptIfRunning` param.
Result:
People who call the `cancel` method will be more clear about the effect of the `mayInterruptIfRunning` param.
Motivation:
The netty-all artifact doesn't have an Automatic-Module-Name defined in the manifest like the rest of the projects do, resulting in `requires netty.all`.
Modification:
Add Automatic-Module-Name
Result:
Correctly work as java9 module.
Motivation:
When checking if a value is present, ReadOnlyHttp2Headers always ignores
case for values.
RFC 7540 says: https://tools.ietf.org/html/rfc7540#section-8.1.2
"header field names are strings of ASCII characters that are compared in a case-insensitive fashion"
But there is no such constraint on header values.
Modifications:
Updated ReadOnlyHttp2Headers.contains to compare header value in a
case-sensitive way.
Result:
ReadOnlyHttp2Headers compares header names in a case-insensitive way,
values in a case-sensitive way.
Motivation:
If you test a header value providing a String, contains() returns false.
This is due to the implementation inherited from DefaultHeaders using
the JAVA_HASHER.
JAVA_HASHER.equals returns false because one argument is a String and the
other an AsciiString.
Modifications:
DefaultHttp2Headers overrides contains and uses CASE_SENSITIVE_HASHER.
Result:
You can test a header value with any CharSequence implementation.