Motivation:
If an event occurs that generates non-application data (shutdown, handshake failure, alert generation, etc.) and the non-application buffer in the ByteBuffer BIO is full (or sufficiently small), we may not propagate all data to our peer before tearing down the socket.
Modifications:
- when wrap() detects the outbound is closed but there is still data pending in the non-application buffers, we must also check whether OpenSSL will generate more data from calling SSL_shutdown again
- when wrap() detects a handshake exception during a failure, we should check whether OpenSSL has any pending data (in addition to the non-application buffer) before throwing the handshake exception
Result:
OpenSslEngine more reliably transmits data to the peer before closing the socket.
Motivation:
Simplify implementation of compareTo/equals/hashCode for ChannelIds.
Modifications:
We simplify the hashCode implementation for DefaultChannelId by not
making it random, but making it based on the underlying data. We fix the
compareTo implementation for DefaultChannelId by using lexicographic
comparison of the underlying data array. We fix the compareTo
implementation for CustomChannelId to avoid the possibility of overflow.
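A minimal sketch of the idea (illustrative only; the class and field layout below are assumptions, not the actual DefaultChannelId code):

    // Illustrative sketch only: the class and field layout are assumptions,
    // not the actual DefaultChannelId implementation.
    final class ExampleChannelId implements Comparable<ExampleChannelId> {
        private final byte[] data;

        ExampleChannelId(byte[] data) {
            this.data = data.clone();
        }

        @Override
        public int hashCode() {
            // Derived from the underlying data instead of being random.
            return java.util.Arrays.hashCode(data);
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof ExampleChannelId
                    && java.util.Arrays.equals(data, ((ExampleChannelId) o).data);
        }

        @Override
        public int compareTo(ExampleChannelId other) {
            // Lexicographic comparison of the underlying byte arrays.
            byte[] a = data;
            byte[] b = other.data;
            int len = Math.min(a.length, b.length);
            for (int i = 0; i < len; i++) {
                // Compare as unsigned bytes; the difference of two values in
                // [0, 255] cannot overflow an int.
                int cmp = (a[i] & 0xff) - (b[i] & 0xff);
                if (cmp != 0) {
                    return cmp;
                }
            }
            return a.length - b.length;
        }
    }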
Result:
Cleaner code that is easier to maintain.
Motivation:
Java 8 has been out for some time now and JDK 7 is no longer officially supported. We should remove all our backports and just use what the JDK provides us. This also allows us to use intrinsics that are offered by the JDK implementations.
Modifications:
Remove all backports of JDK 8 classes.
Result:
Use what the JDK offers us. This also fixes [#5458]
Motivation:
JdkOpenSslEngineInteroptTest.mySetupMutualAuthServerIsValidClientException(...) delegated to the wrong super method.
Modifications:
Fix the delegation so the correct super method is called.
Result:
Correct test-code.
Motivation:
Some versions of OpenSSL don't support the APIs needed to use a KeyManagerFactory. In this case we should skip the tests.
Modifications:
- Use assumeTrue(...) to skip tests that need a KeyManagerFactory when it is not supported (see the sketch below).
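A minimal sketch of such a guard in a JUnit 4 test; the capability check OpenSsl.supportsKeyManagerFactory() is an assumption about the available API:

    import static org.junit.Assume.assumeTrue;

    import io.netty.handler.ssl.OpenSsl;
    import org.junit.Test;

    public class KeyManagerFactoryOpenSslTest {
        @Test
        public void testMutualAuthWithKeyManagerFactory() throws Exception {
            // Skip (rather than fail) when the underlying OpenSSL build does not
            // expose the APIs needed for KeyManagerFactory support.
            assumeTrue(OpenSsl.supportsKeyManagerFactory());
            // ... actual test body ...
        }
    }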
Result:
Tests pass on all OpenSSL versions we support.
Motivation:
Today, the HTTP codec in Netty responds to HTTP/1.1 requests containing
an "expect: 100-continue" header and a content-length that exceeds the
max content length for the server with a 417 status (Expectation
Failed). This is a violation of the HTTP specification. The purpose of
this commit is to address this situation by modifying the HTTP codec to
respond in this situation with a 413 status (Request Entity Too
Large). Additionally, the HTTP codec ignores expectations in the expect
header that are currently unsupported. This commit also addresses this
situation by responding with a 417 status.
Handling the expect header is tricky business as the specification (RFC
2616) is more complicated than it needs to be. The specification defines
the legitimate values for this header as "100-continue" and defines the
notion of expectation extensions. Further, the specification defines a
417 status (Expectation Failed) and this is where implementations go
astray. The intent of the specification was for servers to respond with
417 status when they do not support the expectation in the expect
header.
The key sentence from the specification follows:
The server MUST respond with a 417 (Expectation Failed) status if
any of the expectations cannot be met or, if there are other
problems with the request, some other 4xx status.
That is, a server should respond with a 417 status if and only if there
is an expectation that the server does not support (whether it be
100-continue, or another expectation extension), and should respond with
another 4xx status code if the expectation is supported but there is
something else wrong with the request.
Modifications:
This commit modifies the HTTP codec by changing the handling for the
expect header in the HTTP object aggregator. In particular, the codec
will now respond with 417 status if any expectation other than
100-continue is present in the expect header, the codec will respond
with 413 status if the 100-continue expectation is present in the expect
header and the content-length is larger than the max content length for
the aggregator, and otherwise the codec will respond with 100 status.
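A sketch of the decision logic described above, using Netty's HTTP codec types (illustrative only, not the aggregator's actual implementation):

    import io.netty.handler.codec.http.HttpHeaderNames;
    import io.netty.handler.codec.http.HttpHeaderValues;
    import io.netty.handler.codec.http.HttpRequest;
    import io.netty.handler.codec.http.HttpResponseStatus;
    import io.netty.handler.codec.http.HttpUtil;
    import io.netty.util.AsciiString;

    final class ExpectHeaderExample {
        /**
         * Returns the status to reply with for the given request, or null when no
         * expect header is present and the request can simply be passed through.
         */
        static HttpResponseStatus decide(HttpRequest request, long maxContentLength) {
            CharSequence expect = request.headers().get(HttpHeaderNames.EXPECT);
            if (expect == null) {
                return null; // no expectation, nothing special to do
            }
            if (!AsciiString.contentEqualsIgnoreCase(expect, HttpHeaderValues.CONTINUE)) {
                // Unsupported expectation -> 417 Expectation Failed.
                return HttpResponseStatus.EXPECTATION_FAILED;
            }
            if (HttpUtil.getContentLength(request, -1L) > maxContentLength) {
                // 100-continue but the declared body is too large -> 413.
                return HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE;
            }
            // Supported expectation and acceptable length -> 100 Continue.
            return HttpResponseStatus.CONTINUE;
        }

        private ExpectHeaderExample() { }
    }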
Result:
The HTTP codec can now be used to correctly reply to clients that send a
100-continue expectation with a content-length that is too large for the
server with a 413 status, and servers that use the HTTP codec will now
no longer ignore expectations that are not supported (any value other
than 100-continue).
Motivation:
tcnative was moved into an internal package.
Modifications:
Update package for tcnative imports.
Result:
Use correct package names for tcnative.
Motivation:
We need to pass special arguments to run with JDK 9 as otherwise some tests will not be able to run.
Modifications:
Allow defining extra arguments when running with JDK 9.
Result:
Tests pass with JDK 9.
Motivation:
As we now need to compile with Java 8, we should still allow running the tests with a different Java version to ensure everything also works with Java 7 and 6.
Modifications:
Allow passing "-DtestJavaHome" to the build so a different Java version is used when running the tests.
Result:
Be able to run tests with different Java versions.
Motivation:
If the OpenSslEngine has bytes pending in the non-application buffer and also generates wrapped data during the handshake then the handshake data will be missed. This will lead to a handshake stall and eventually timeout. This can occur if the non-application buffer becomes full due to a large certificate/hello message.
Modification:
- ReferenceCountedOpenSslEngine should not assume that, if no data is flushed from the non-application buffer, no data will be produced by the handshake.
Result:
New unit tests with larger certificate chains don't fail.
Motivation:
OpenSslEngineTest has unused imports and SSLEngineTest uses a fixed port which was used for debugging.
Modifications:
- Remove unused imports
- Use ephemeral port
Result:
Cleaner test code.
Motivation:
tcnative made some fixes and API changes related to setVerify. We should absorb these changes in Netty.
Modifications:
- Use tcnative's updated APIs
- Add unit tests to demonstrate correct behavior
Result:
Updated to latest tcnative code and more unit tests to verify expected behavior.
Motivation:
tcnative has updated how constants are defined and removed some constants which are either obsolete or now set directly in tcnative.
Modifications:
- update to compile against tcnative changes.
Result:
Netty compiles with tcnative options changes.
Motivation:
Initialization of PlatformDependent0 fails on Java 9 in static initializer when calling setAccessible(true).
Modifications:
Add ReflectionUtil which can be used to safely try whether setAccessible(true) can be used and, if not, fall back to not using reflection.
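A minimal sketch of the idea (the helper and method names below are illustrative, not necessarily the actual ReflectionUtil API):

    import java.lang.reflect.AccessibleObject;

    final class ReflectionHelperSketch {
        /**
         * Tries setAccessible(true) and returns null on success, or the thrown
         * Throwable (e.g. a SecurityException, or an InaccessibleObjectException
         * on Java 9) so the caller can fall back to not using reflection.
         */
        static Throwable trySetAccessible(AccessibleObject object) {
            try {
                object.setAccessible(true);
                return null;
            } catch (Throwable cause) {
                return cause;
            }
        }

        private ReflectionHelperSketch() { }
    }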
Result:
Fixed [#6345]
Motivation:
We can eliminate unnecessary wrapping when calling ByteBuf.asReadOnly() in some cases to reduce indirection.
Modifications:
- Check if asReadOnly() needs to create a new instance or not (see the sketch below)
- Add test cases
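A sketch of the check (illustrative; not the exact ByteBuf implementation):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    final class ReadOnlyExample {
        // Illustrative: return the buffer itself when it is already read-only
        // instead of allocating yet another wrapper around it.
        static ByteBuf asReadOnly(ByteBuf buf) {
            return buf.isReadOnly() ? buf : Unpooled.unmodifiableBuffer(buf);
        }

        private ReadOnlyExample() { }
    }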
Result:
Less object creation / wrapping.
Motivation:
We should remove the restriction that only allows calling unwrap with a ByteBuffer[] whose cumulative length exceeds MAX_ENCRYPTED_PACKET_LENGTH.
Modifications:
Remove guard.
Result:
Fixes [#6335].
Motivation:
To use jboss-marshalling, extra command-line arguments are needed on JDK 9+ as it makes use of reflection internally.
Modifications:
Skip the jboss-marshalling tests when running on JDK 9+ and the initialization of MarshallingFactory fails.
Result:
Be able to build on latest JDK9 release.
Motivation:
There were some warnings for the code in the ssl package.
Modifications:
- Remove unneeded else blocks
- Use the correct base class for accessing static members
- Replace String.length() == 0 with String.isEmpty()
- Remove unused code
Result:
Fewer warnings and cleaner code.
Motivation:
We need to ensure we pass all tests when sun.misc.Unsafe is not present.
Modifications:
- Make *ByteBufAllocatorTest work whether sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details which take into account whether sun.misc.Unsafe is present or not
Result:
Tests pass even without sun.misc.Unsafe.
Motivation:
We should only try to calculate the direct memory offset when sun.misc.Unsafe is present, as otherwise it will fail with an NPE because PlatformDependent.directBufferAddress(...) will throw it.
This problem was introduced by 66b9be3a46.
Modifications:
Use an offset of 0 if no sun.misc.Unsafe is present.
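A sketch of the guard (illustrative), assuming the PlatformDependent.hasUnsafe() and PlatformDependent.directBufferAddress(...) helpers:

    import java.nio.ByteBuffer;

    import io.netty.util.internal.PlatformDependent;

    final class MemoryAddressExample {
        // Only compute the native memory address when sun.misc.Unsafe is usable;
        // otherwise use 0 so nothing ever tries to dereference it.
        static long memoryAddressOf(ByteBuffer direct) {
            return PlatformDependent.hasUnsafe()
                    ? PlatformDependent.directBufferAddress(direct)
                    : 0L;
        }

        private MemoryAddressExample() { }
    }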
Result:
PooledByteBufAllocator also works again when no sun.misc.Unsafe is present.
Motivation:
ReadOnlyByteBufTest contains two tests which are missing the `@Test` annotation and so will never run.
Modifications:
Add missing annotation.
Result:
Tests run as expected.
Motivation:
EPOLL annotates some exceptions to provide the remote address, but the original exception is not preserved. This may make determining a root cause more difficult. The static EPOLL exceptions reference the native method that failed, but do not provide a description of the actual error number. Without the description, users have to know intimate details about the native calls and how they may fail to debug issues.
Modifications:
- annotated exceptions should preserve the original exception
- static exceptions should include the string description of the expected errno
Result:
EPOLL exceptions provide more context and are more useful to end users.
Motivation:
54c9ecf682 introduced a unit test which attempted to exclude addresses which resolved to loopback addresses from an assert statement. This was done with a static check for localhost, but depending on machine configuration other names may also resolve to a loopback address.
Modifications:
- Use InetAddress#isLoopbackAddress() instead of string match on localhost
Result:
DnsNameResolverTest#testNameServerCache is more reliable.
Motivation:
EpollRecvByteAllocatorHandle intends to override the meaning of "maybe more data to read", which is a concept also used in all existing implementations of RecvByteBufAllocator$Handle, but the interface doesn't support overriding it. Because the interfaces lack the ability to propagate this computation, EpollRecvByteAllocatorHandle attempts to implement a heuristic on top of the delegate, which may lead to reading when we shouldn't or to not reading data when we should.
Modifications:
- Create a new interface ExtendedRecvByteBufAllocator and ExtendedHandle which allow propagating the "maybe more data to read" state between interfaces (see the sketch below)
- Deprecate RecvByteBufAllocator and change all existing implementations to extend ExtendedRecvByteBufAllocator
- transport-native-epoll should require ExtendedRecvByteBufAllocator so the "maybe more data to read" can be propagated to the ExtendedHandle
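A rough sketch of the shape of such a handle interface (the name and supplier type are illustrative; the actual Netty API may differ):

    import java.util.function.BooleanSupplier;

    // Illustrative only: lets the transport pass its own "maybe more data to read"
    // computation into the allocator handle instead of the handle guessing.
    interface ExtendedHandleSketch {
        boolean continueReading(BooleanSupplier maybeMoreDataToRead);
    }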
Result:
Fixes https://github.com/netty/netty/issues/6303.
Motivation:
CipherSuiteConverter may throw an NPE if a cipher suite from OpenSSL does not match the precomputed regular expression for OpenSSL ciphers. This method shouldn't throw and should instead just return null.
Modifications:
- If cacheFromOpenSsl(..) fails, the conversion toJava should return null
Result:
Fixes https://github.com/netty/netty/issues/6336.
Motivation:
The resolver package had some changes late in the 4.1.CR phase and the intention was to mark this package as unstable until these interfaces solidify, but we forgot to mark the package and public classes with the unstable annotation.
Modifications:
- resolver package public interfaces and package-info should be annotated with @UnstableApi
Result:
The unstable nature of the resolver package is more clearly communicated.
Motivation:
The JDK uses gethostbyname for blocking hostname resolution. gethostbyname can be configured on Unix systems according to [1][2]. This may impact the name server that is used to resolve particular domains or just override the default fall-back resolver. DnsNameResolver currently ignores these configuration files, which means the default resolution behavior is different than the JDK. This may lead to unexpected resolution failures which succeed when using the JDK's resolver.
Modifications:
- Add an interface which can override which DnsServerAddressStream to use for a given hostname (see the sketch below)
- Provide a Unix-specific implementation of this interface that implements [1][2]. Some elements may be ignored (sortlist, timeout, etc.)
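A rough sketch of such a hook (illustrative name and return type; the real interface would return a DnsServerAddressStream):

    import java.net.InetSocketAddress;
    import java.util.List;

    // Illustrative only: a per-hostname hook that decides which name servers to
    // query, e.g. driven by /etc/resolv.conf and /etc/resolver entries.
    interface PerHostDnsServerProviderSketch {
        List<InetSocketAddress> nameServersFor(String hostname);
    }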
Result:
DnsNameResolver behaves more like the JDK resolver by default.
[1] https://linux.die.net/man/5/resolver
[2] https://developer.apple.com/legacy/library/documentation/Darwin/Reference/ManPages/man5/resolver.5.html
Motivation:
An update of Groovy is needed to compile on recent Java 9 releases.
Modification:
Update to Groovy 2.4.8
Result:
This change allows Netty to be successfully compiled on more recent Java 9 previews.
Motivation:
HostsFileParser only retains the first address for each given hostname.
This is wrong, as a hostname is allowed to have both an IPv4 and an IPv6 address.
Modifications:
* Have `HostsFileParser` now return a `HostsFileEntries` that contains both IPv4 and IPv6 entries (see the usage sketch after this list)
* Introduce `ResolvedAddressTypes` to describe resolved address types preferences
* Add a new `ResolvedAddressTypes` parameter to `HostsFileEntriesResolver::address` to account for address types preferences
* Change `DnsNameResolver` constructor to take a `ResolvedAddressTypes`, allowing for a null value that would use the default
* Change `DnsNameResolverBuilder::resolvedAddressTypes` to take a `ResolvedAddressTypes`
* Make `DnsNameResolver::resolvedAddressTypes` return a `ResolvedAddressTypes`
* Add a static `DnsNameResolverBuilder::computeResolvedAddressTypes` to ease converting from `InternetProtocolFamily`
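A hypothetical usage sketch; the accessor names inet4Entries()/inet6Entries() are assumptions about the new API:

    import java.net.Inet4Address;
    import java.net.Inet6Address;
    import java.util.Map;

    import io.netty.resolver.HostsFileEntries;
    import io.netty.resolver.HostsFileParser;

    final class HostsFileExample {
        static void dump() {
            // Both the IPv4 and the IPv6 entry of a hostname are now retained.
            HostsFileEntries entries = HostsFileParser.parseSilently();
            Map<String, Inet4Address> v4 = entries.inet4Entries(); // assumed accessor
            Map<String, Inet6Address> v6 = entries.inet6Entries(); // assumed accessor
            System.out.println("IPv4: " + v4 + ", IPv6: " + v6);
        }

        private HostsFileExample() { }
    }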
Result:
We now support hosts files that contain IPv4 and IPv6 pairs for the same
hostname.
Motivation:
Netty 4.1 introduced AsciiString and defines HttpHeaderNames constants
as such.
It would be convenient to be able to pass them to `exposeHeaders` and
`allowedRequestHeaders` directly without having to call `toString`.
Modifications:
Add `exposeHeaders` and `allowedRequestHeaders` overloads that take a
`CharSequence`.
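A hedged usage example, assuming the overloads land on CorsConfigBuilder:

    import io.netty.handler.codec.http.HttpHeaderNames;
    import io.netty.handler.codec.http.cors.CorsConfig;
    import io.netty.handler.codec.http.cors.CorsConfigBuilder;

    final class CorsExample {
        static CorsConfig config() {
            // AsciiString constants from HttpHeaderNames can now be passed
            // directly, without calling toString() on them first.
            return CorsConfigBuilder.forAnyOrigin()
                    .exposeHeaders(HttpHeaderNames.CONTENT_TYPE)
                    .allowedRequestHeaders(HttpHeaderNames.CONTENT_TYPE)
                    .build();
        }

        private CorsExample() { }
    }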
Result:
More convenient API
Motivation:
The result of validatePromise() is always inverted, as in if (!validatePromise()).
Modification:
validatePromise() was renamed to isNotValidPromise() and now returns the inverted state, so there is no need to invert the result in conditions. The name is also more meaningful with respect to the returned result.
Added more tests for validatePromise corner cases involving exceptions.
Result:
Code is easier to read. No need to invert the result.
Motivation:
Currently Netty utilizes BIO_new_bio_pair so we can control all FD lifetime and event notification but delegates to OpenSSL for encryption/decryption. The current implementation sets up a pair of BIO buffers to read/write encrypted/plaintext data. This approach requires copying of data from Java ByteBuffers to native memory BIO buffers, and also requires both BIO buffers to be sufficiently large to hold application data. If direct ByteBuffers are used we can avoid copying to/from the intermediate BIO buffer and just read/write directly from the direct ByteBuffer memory. We still need an internal buffer because OpenSSL may generate write data as a result of read calls (e.g. handshake, alerts, renegotiation, etc.), but this buffer doesn't have to be large enough to hold application data.
Modifications:
- Take advantage of the new ByteBuffer based BIO provided by netty-tcnative instead of using BIO_read and BIO_write.
Result:
Less copying and lower memory footprint requirement per TLS connection.
Motivation:
a416b79 introduced a check for null or empty host name to be compatible with the JDK resolution. However the check was only applied in the doResolve(String, Promise) method, so if the doResolve(String, DnsRecord[], Promise, DnsCache) method was overridden the empty/null hostname would not be correctly resolved.
Modifications:
- Move the empty/null host name check into the lowest level doResolve method in DnsNameResolver
- Remove the duplicate logic in InetNameResolver.java which can be bypassed anyway
Result:
By default (unless behavior is overridden) DnsNameResolver resolves null/empty host names to local host just like the JDK.
Motivation:
DefaultCookie currently uses an undocumented magic value for undefined
maxAge.
Clients need to be able to identify such value so they can implement a
proper CookieJar.
Ideally, we should add a `Cookie::isMaxAgeDefined` method but I guess
we can’t add a new method without breaking API :(
Modifications:
Add a new constant on the `Cookie` interface so clients can use it to
compare with the value returned by `Cookie.maxAge` and decide whether
`maxAge` was actually defined.
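A hedged usage sketch, assuming the constant is exposed as Cookie.UNDEFINED_MAX_AGE:

    import io.netty.handler.codec.http.cookie.Cookie;

    final class CookieJarExample {
        // Illustrative: only treat maxAge as meaningful when it was actually
        // present on the Set-Cookie header, otherwise treat it as a session cookie.
        static boolean hasExplicitMaxAge(Cookie cookie) {
            return cookie.maxAge() != Cookie.UNDEFINED_MAX_AGE; // assumed constant name
        }

        private CookieJarExample() { }
    }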
Result:
Clients have a better documented way to check if the maxAge attribute
was defined.
Motivation:
We need to ensure we release the AddressedEnvelope if we fail to notify the future (as it may have been notified before because of a timeout). Otherwise we may leak.
Modifications:
Call release() if we fail to notify the future.
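A sketch of the pattern (illustrative helper, not the actual resolver code):

    import io.netty.channel.AddressedEnvelope;
    import io.netty.util.ReferenceCountUtil;
    import io.netty.util.concurrent.Promise;

    final class NotifyExample {
        // If the promise was already completed (e.g. failed earlier by a timeout),
        // trySuccess(...) returns false and we must release the envelope ourselves
        // to avoid leaking the reference-counted message.
        static <T extends AddressedEnvelope<?, ?>> void notifyOrRelease(Promise<T> promise, T envelope) {
            if (!promise.trySuccess(envelope)) {
                ReferenceCountUtil.safeRelease(envelope);
            }
        }

        private NotifyExample() { }
    }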
Result:
No more memory leak on notify failure.
Motivation:
ResourceLeakDetector supports a parameter called maxActive. This parameter is used in an attempt to limit the number of objects which are being tracked for leaks at any given time, and generates an error log message if this limit is exceeded. This assumes that there is a relationship between leak sample rate and object lifetime for objects which are already being tracked. This relationship may appear to work in cases where there is a single leak record per object and those leak records live for the lifetime of the application, but in general this relationship doesn't exist. The original motivation was to provide a limit for cases such as HashedWheelTimer to limit the number of instances which exist at any given time. This limit is not enforced in all circumstances in HashedWheelTimer (e.g. if the thread is a daemon) and can be implemented outside ResourceLeakDetector.
Modifications:
- Deprecate all methods which interact with maxActive in ResourceLeakDetectorFactory and ResourceLeakDetector
- Remove all logic related to maxActive in ResourceLeakDetector
- HashedWheelTimer implements its own logic to impose a limit and warn users if too many instances exist at any given time.
Result:
Fixes https://github.com/netty/netty/issues/6225.
Motivation:
The SSLEngine wrap and unwrap methods can be called in a way that has no side effects, but this could involve costly validation and allocation. The SslHandler should avoid calling into these methods if possible.
Modifications:
- wrapNonAppData should provide additional status which can be used by wrap to break out early if possible
Result:
SslHandler invokes the SSLEngine less.