Motivation:
Conscrypt is a Java Security provider that wraps OpenSSL (specifically BoringSSL). It is a possible alternative to netty-tcnative that we should explore, so this commit just enables us to investigate its use further.
Modifications:
Modify the SslContext creation path to support the Conscrypt provider.
Result:
Netty will support OpenSSL with Conscrypt.
Motivation:
gRPC (and potentially other libraries) has an optimized header processor that requires direct access to the HpackDecoder.
Modifications:
Make the HpackDecoder and its constructors public.
Result:
Fixes #6579
Motivation:
We should limit the size of the allocated outbound buffer to MAX_ENCRYPTED_PACKET_LENGTH to ensure we do not cause an OOME when the user tries to encrypt a very big buffer.
Modifications:
Limit the size of the allocated outbound buffer to MAX_ENCRYPTED_PACKET_LENGTH
Result:
Fixes [#6564]
Motivation:
UnsafeByteBufUtil has some bugs related to using an incorrect index, and also omits the array parameter when dealing with byte[] objects. There is also some simplification possible with respect to type casting, and minor formatting consistency issues.
Modifications:
- Ensure indexing is correct when dealing with native memory
- Fix the native access and endianness for the medium/unsigned medium methods
- Ensure array is used when dealing with heap memory
- Remove unnecessary casts when using long
- Fix formatting and alignment
Result:
UnsafeByteBufUtil is more correct and won't access direct memory when heap arrays are used.
Motivation:
When UNSAFE.allocateMemory returns an address whose high bit is set, we currently throw an IllegalArgumentException. This is not correct, as the returned address may be a negative number on at least SPARC.
Modifications:
- Allow a negative memoryAddress to be passed in
- Add unit tests
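A minimal sketch of the relaxed validation, with illustrative names (the real check lives inside the buffer implementations):

```java
// Minimal sketch; names are illustrative and not the actual Netty internals.
final class MemoryAddressValidation {
    static long validate(long memoryAddress) {
        // Only 0 signals an invalid allocation. A negative value is a legal address on
        // platforms such as SPARC where the high bit of the pointer may be set.
        if (memoryAddress == 0) {
            throw new IllegalArgumentException("memoryAddress must not be 0");
        }
        return memoryAddress;
    }
}
```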
Result:
Correctly validate the memoryAddress so that things also work as expected on SPARC. Fixes [#6574].
Motivation:
There are two files that still use `System.out.println` to log their status.
Modification:
Replace `System.out.println` with a `debug` call on an `InternalLogger` obtained from the `InternalLoggerFactory`.
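A sketch of the replacement pattern, using an illustrative class (the actual benchmark classes are not reproduced here):

```java
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;

// Illustrative class; shows the System.out.println -> InternalLogger.debug replacement.
public class BenchmarkStatus {
    private static final InternalLogger logger =
            InternalLoggerFactory.getInstance(BenchmarkStatus.class);

    void report(String message) {
        // previously: System.out.println(message);
        logger.debug(message);
    }
}
```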
Result:
An `InternalLogger` instance is introduced in `AbstractMicrobenchmark.java` and `AbstractSharedExecutorMicrobenchmark.java`, and their status output goes through it.
Motivation:
Currently Netty receives an HTTP request as a ByteBuf and stores it as a CharSequence in HttpObjectDecoder. During this operation, every character in the ByteBuf is moved to a char[] without breaking the encoding.
But in the process() function, the cast from byte to char does not take the MSB (sign bit) into account, so values over 127 can be cast to the wrong value (e.g. 0xec as a byte becomes 0xffec as a char). This is a type-casting bug.
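A minimal, self-contained illustration of the sign-extension problem and the masking fix (not the actual HttpObjectDecoder code):

```java
public final class ByteToCharCast {
    public static void main(String[] args) {
        byte b = (byte) 0xEC;

        // Sign-extending cast: 0xEC becomes 0xFFEC, corrupting non-latin characters.
        char wrong = (char) b;

        // Masking off the high bits first yields the intended value 0x00EC.
        char right = (char) (b & 0xFF);

        System.out.printf("wrong=0x%04X right=0x%04X%n", (int) wrong, (int) right);
    }
}
```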
Modification:
Fix type casting
Result:
Non-latin characters work.
Motivation:
The widely used SSL implementation OpenSSL already supports the Heartbeat Extension, both sending and responding to Heartbeat messages. But since Netty does not recognize that extension as a valid packet, peers won't be able to use it.
Modification:
Update SslUtils.java to recognize the Heartbeat Extension as a valid TLS packet.
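As a sketch of what "valid TLS packet" means here, these are the TLS record content types involved (the actual check in SslUtils may be structured differently):

```java
// Illustrative content-type check; the actual SslUtils implementation may differ.
final class TlsContentTypes {
    static boolean isValidTlsContentType(int contentType) {
        switch (contentType) {
            case 20: // change_cipher_spec
            case 21: // alert
            case 22: // handshake
            case 23: // application_data
            case 24: // heartbeat (RFC 6520)
                return true;
            default:
                return false;
        }
    }
}
```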
Result:
With this change, software using Netty + OpenSSL will be able to respond to TLS Heartbeat requests (this is handled by OpenSSL itself, so no extra implementation is needed from clients).
Motivation
Http2StreamChannel ignores the options of its parent channel when being created. That leads to surprising results when, for example, an unpooled allocator is silently replaced with the pooled allocator (the default setting).
Modification
Copy parent channel's options over to the Http2StreamChannel.
Result
Channel options are now consistent between the Http2StreamChannel and its parent channel. The newly added test passes on this branch and fails on master. Fixes #6551.
Motivation:
The code accidentally passes channel twice instead of value, resulting in logs like:
Failed to set channel option 'SO_SNDBUF' with value '[id: 0x2c5b2eb4]' for channel '[id: 0x2c5b2eb4]'
Modifications:
Pass value instead of channel where it needs to be.
Result:
The log now shows the actual value:
Failed to set channel option 'SO_SNDBUF' with value '0' for channel '[id: 0x9bd3c5b8]'
Motivation:
The contract of `ByteBuf.writeBytes(ByteBuf src)` is such that it will
throw an `IndexOutOfBoundsException` if `src.readableBytes()` is greater than
`this.writableBytes()`. The EmptyByteBuf class will throw the exception,
even if the source buffer has zero readable bytes, in violation of the
contract.
Modifications:
Use the helper method `checkLength(..)` to check the length and throw
the exception, if appropriate.
Result:
Conformance with the stated behavior of ByteBuf.
Motivation:
The updated HTTP/1.x RFC allows for header values to be CSV and separated by OWS [1]. CombinedHttpHeaders should remove this OWS on insertion.
[1] https://tools.ietf.org/html/rfc7230#section-7
Modification:
CombinedHttpHeaders now strips the OWS when combining values, so it is not returned to the user as part of the value.
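A sketch of the kind of OWS trimming applied to each comma-separated value (illustrative helper, not the actual CombinedHttpHeaders code):

```java
// Illustrative helper: strip optional whitespace (OWS) from both ends of a value
// so that " gzip " and "deflate" combine to "gzip,deflate" rather than " gzip ,deflate".
final class OwsTrimmer {
    static CharSequence trimOws(CharSequence value) {
        int start = 0;
        int end = value.length();
        while (start < end && Character.isWhitespace(value.charAt(start))) {
            start++;
        }
        while (end > start && Character.isWhitespace(value.charAt(end - 1))) {
            end--;
        }
        return value.subSequence(start, end);
    }
}
```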
Result:
Fixes #6452
Motivation:
We should use SystemPropertyUtil to access system properties and so always handle SecurityExceptions.
Modifications:
Use SystemPropertyUtil everywhere.
Result:
Better and consistent handling of SecurityException.
Motivation:
In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.
Modification:
Add a new test method to XmlFrameDecoder to assert that the reframing works as described.
Result:
New test method is added to XmlFrameDecoder
Motivation:
ChunkedWriteHandler queues written messages and actually writes them
when flush is called. In its doFlush method, it needs to flush after
each chunk is written to preserve memory. However, non-chunked messages
(those that aren't of type ChunkedInput) are treated in the same way,
which means that flush is called after each message is written.
Modifications:
Moved the call to flush() inside the if block that tests if the message
is an instance of ChunkedInput. To ensure flush is called at least once,
the existing boolean flushed is checked at the end of doFlush. This
check was previously in ChunkedWriteHandler.flush(), but wasn't checked in
other invocations of doFlush, e.g. in channelInactive.
Result:
When this handler is present in a pipeline, a series of non-chunked
writes is flushed as the developer intended.
Motivation:
We used some deprecated Mockito methods.
Modifications:
- Replace deprecated method usage
- Some cleanup
Result:
No more usage of deprecated Mockito methods. Fixes [#6482].
Motivation:
DefaultHttp2ConnectionDecoder#onSettingsRead processes the settings, and then sends a SETTINGS ACK to the remote peer. Processing the settings may result in frames which violate the previous settings being sent to the remote peer. The remote peer will not apply the new settings until it has received the SETTINGS ACK, and therefore we may violate the settings from the remote peer's perspective and the connection will be shut down.
Modifications:
- We should send the SETTINGS ACK before we process the settings to ensure the peer receives the SETTINGS ACK before other frames which assume the settings have already been applied
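A minimal sketch of the reordering (illustrative class; the real DefaultHttp2ConnectionDecoder differs in detail):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http2.Http2ConnectionEncoder;
import io.netty.handler.codec.http2.Http2Settings;

// Illustrative only: ACK first, then apply, so the peer never sees frames that
// assume settings it has not yet acknowledged.
final class SettingsAckOrdering {
    private final Http2ConnectionEncoder encoder;

    SettingsAckOrdering(Http2ConnectionEncoder encoder) {
        this.encoder = encoder;
    }

    void onSettingsRead(ChannelHandlerContext ctx, Http2Settings settings) {
        encoder.writeSettingsAck(ctx, ctx.newPromise());
        applyRemoteSettings(settings); // hypothetical placeholder for the local processing
    }

    private void applyRemoteSettings(Http2Settings settings) {
        // process the settings after the ACK has been queued
    }
}
```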
Result:
Fixes https://github.com/netty/netty/issues/6520.
Motivation:
We forked a new process to detect if the program is run by root. We should instead just use the user.name system property.
Modifications:
- Change PlatformDependent.isRoot0() to read the user.name system property to detect if root runs the program and rename it to maybeSuperUser0().
- Rename PlatformDependent.isRoot() to maybeSuperUser() and let it be initialized directly in the static block
Result:
A less heavyweight way to detect if the program is run by root.
Motivation:
When UnorderedThreadPoolEventExecutor.execute / submit etc. is called, it will consume up to 100% CPU even after the task has been executed.
Modifications:
Add a special wrapper which will be used in execute(...) to wrap the submitted Runnable. This is needed because ScheduledThreadPoolExecutor.execute(...) delegates to submit(...), which then uses decorateTask(...). The problem with this is that decorateTask(...) needs to ensure we only apply our own decoration when we are not called from execute(...), as otherwise we may end up creating an endless loop because DefaultPromise will call EventExecutor.execute(...) when notifying the listeners of the promise.
Result:
Fixes [#6507].
Motivation:
Some pipelines require support for both SSL and non-SSL messaging.
Modifications:
Add utility decoder to support both SSL and non-SSL handlers based on the initial message.
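A minimal sketch of such a detection decoder, assuming SslHandler.isEncrypted(ByteBuf) is used to sniff the first record (handler name and structure are illustrative, not the actual utility added by this commit):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslHandler;

import java.util.List;

// Illustrative sketch; the utility decoder added by this commit may differ.
public class SslDetectionDecoder extends ByteToMessageDecoder {
    private final SslContext sslContext;

    public SslDetectionDecoder(SslContext sslContext) {
        this.sslContext = sslContext;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 5) {
            return; // wait for a full TLS record header before deciding
        }
        if (SslHandler.isEncrypted(in)) {
            // TLS detected: insert the SslHandler and drop this detector.
            ctx.pipeline().replace(this, "ssl", sslContext.newHandler(ctx.alloc()));
        } else {
            // Plain traffic: simply remove the detector and continue.
            ctx.pipeline().remove(this);
        }
    }
}
```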
Result:
Less boilerplate code to write for developers.
Motivation:
Http2SecurityUtil currently lists HTTP/2 ciphers as documented by
JSSE docs [1] and the IANA [2] using the TLS_ prefix.
In some IBM J9 implementations the SSL_ prefix is used, which is also
covered by the JSSE.
[1] http://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html
[2] http://www.iana.org/assignments/tls-parameters/tls-parameters.xhtml
Modifications:
Add both variants of the cipher names (prefixed with SSL_ in addition to TLS_)
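A sketch of the pattern for registering both prefix variants (illustrative helper, not the actual Http2SecurityUtil code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative helper: add an SSL_-prefixed alias for every TLS_-prefixed cipher name.
final class CipherNameVariants {
    static List<String> withBothPrefixes(List<String> tlsPrefixedCiphers) {
        List<String> all = new ArrayList<String>(tlsPrefixedCiphers.size() * 2);
        for (String cipher : tlsPrefixedCiphers) {
            all.add(cipher);                                      // e.g. TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
            all.add("SSL_" + cipher.substring("TLS_".length()));  // IBM J9 name for the same suite
        }
        return Collections.unmodifiableList(all);
    }
}
```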
Result:
HTTP/2 connections can now be created using the SslProvider.JDK on IBM J9
and potentially other JVMs which use the SSL_ prefix.
Motivation:
Some classes have fields which can be local.
Modifications:
Convert fields to local variables where possible.
Result:
Clean up. More chances for young generation or scalar replacement.
Motivation:
5e64985089 introduced support for the KeyManagerFactory while using OpenSSL. This same commit also introduced 2 calls to SSLContext.setVerify when 1 should be sufficient.
Modifications:
- Remove the duplicate call to SSLContext.setVerify
Result:
Less duplicate code in ReferenceCountedOpenSslServerContext.
Motivation:
PR [#6460] added a way to access the used memory of an allocator. The naming used was not very good and the way things were exposed was not consistent.
Modifications:
- Add a new ByteBufAllocatorMetric and ByteBufAllocatorMetricProvider interface
- Let the ByteBufAllocator implementations implement ByteBufAllocatorMetricProvider
- Move exposed stats / metric from PooledByteBufAllocator to PooledByteBufAllocatorMetric and mark old methods as `@Deprecated`.
Result:
A more consistent way to expose metrics / stats for ByteBufAllocator.
Motivation:
PlatformDependent0 makes assumptions that the array index scale for byte[] is always 1. If this is not the case the results from methods which make this assumption will be undefined.
Modifications:
- PlatformDependent0 should check if unsafe.arrayIndexScale(byte[].class) is not 1, and if so not use unsafe
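A sketch of the added guard (the real check runs during PlatformDependent0's static initialization):

```java
import sun.misc.Unsafe;

// Illustrative guard: only use the Unsafe byte[] fast paths when each element
// occupies exactly one addressable unit.
final class ByteArrayIndexScaleCheck {
    static boolean byteArrayFastPathUsable(Unsafe unsafe) {
        return unsafe.arrayIndexScale(byte[].class) == 1;
    }
}
```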
Result:
Assumptions made by optimizations in PlatformDependent0 which use byte[] are explicitly enforced.
Motivation:
SslContext and SslContextBuilder do not support a way to specify the desired TLS protocols. This currently requires that the user extracts the SSLEngine once a context is built and manually call SSLEngine#setEnabledProtocols(String[]). Something this critical should be supported at the SslContext level.
Modifications:
- SslContextBuilder should accept a list of protocols to configure for each SslEngine
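A usage sketch, assuming the builder gains a protocols(...) method as described (certificate files are placeholders):

```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

import java.io.File;

// Sketch only: restrict every SSLEngine created from this context to the listed protocols.
final class TlsProtocolConfig {
    static SslContext serverContext(File certChain, File privateKey) throws Exception {
        return SslContextBuilder.forServer(certChain, privateKey)
                .protocols("TLSv1.2", "TLSv1.1")
                .build();
    }
}
```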
Result:
SslContext consistently sets the supported TLS/SSL protocols.
Motivation:
A previous commit added methods to AbstractHttp2ConnectionHandlerBuilder but forgot to expose them in Http2ConnectionHandlerBuilder.
Modifications:
- expose the new methods in Http2ConnectionHandlerBuilder
Result:
Http2ConnectionHandlerBuilder supports the new configuration options.
Motivation:
We no longer ship any forked code of Akka and ArrayDeque.
Modifications:
Remove the references from NOTICE.txt and the license folder.
Result:
Correctly document license-related things.
Motivation:
This pull request does not solve any problem, but we found that several links in the code refer to project websites under the http://code.google.com domain which have either moved to GitHub or are no longer maintained.
Modification:
Update the project links from code.google.com to the relevant projects on github.com.
Motivation:
If the OpenSSL library supports the interfaces required to use the KeyManagerFactory, but we fail to get the io.netty.handler.ssl.openssl.useKeyManagerFactory system property (or this property is set to false), it is possible that SSLEngineTest-based unit tests which use a KeyManagerFactory will fail.
Modifications:
- We should check if the OpenSSL library supports the KeyManagerFactory interfaces and if the system property allows them to be used in OpenSslEngineTests
Result:
Unit tests which use OpenSSL and KeyManagerFactory will be skipped instead of failing.
Motivation:
Make the FileRegion comments about which transports are supported more accurate. Also, eliminate any outstanding references to FileRegion.transfered, as the method was renamed to fix its spelling.
Modifications:
Update the class-level comment on FileRegion and call the renamed method instead of the deprecated one.
Result:
More accurate documentation and fewer calls to deprecated methods.
Motivation:
We currently don't have a benchmark which includes SslHandler. The SslEngine benchmarks also always include a single TLS packet when encoding/decoding. In practice when reading data from the network there may be multiple TLS packets present and we should expand the benchmarks to understand this use case.
Modifications:
- SslEngine benchmarks should include wrapping/unwrapping of multiple TLS packets
- Introduce SslHandler benchmarks which can also account for wrapping/unwrapping of multiple TLS packets
Result:
SslHandler and SslEngine benchmarks are more comprehensive.
Motivation:
When we do a wrap operation we calculate the maximum size of the destination buffer ahead of time, and return a BUFFER_OVERFLOW exception if the destination buffer is not big enough. However if there is a CompositeByteBuf the wrap operation may consist of multiple ByteBuffers and each incurs its own overhead during the encryption. We currently don't account for the overhead required for encryption if there are multiple ByteBuffers and assume the overhead will only apply once to the entire input size. If there is not enough room to write an entire encrypted packet into the BIO, SSL_write will return -1 despite having actually written content to the BIO. We then attempt to retry the write with a bigger buffer, but because SSL_write is stateful the remaining bytes from the previous operation are put into the BIO. This results in the second half of the encrypted data being sent to the peer, which is not of the proper format, and the peer will be confused and ultimately not get the expected data (which may result in a fatal error). In this case, because SSL_write returns -1, we have no way to know how many bytes were actually consumed, so the best we can do is ensure that we always allocate a destination buffer with enough space to guarantee the write operation completes synchronously.
Modifications:
- SslHandler#allocateNetBuf should take into account how many ByteBuffers will be wrapped and apply the encryption overhead for each
- Include the TLS header length in the overhead computation
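As a sketch of the sizing arithmetic this implies (names are illustrative placeholders, not the actual SslHandler constants):

```java
// Illustrative arithmetic only: the per-record overhead (header + padding + MAC)
// must be charged once per wrapped ByteBuffer component, not once overall.
final class NetBufSizing {
    static int calculateOutNetBufSize(int plaintextLength, int numComponents, int maxRecordOverhead) {
        return plaintextLength + numComponents * maxRecordOverhead;
    }
}
```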
Result:
Fixes https://github.com/netty/netty/issues/6481
Motivation:
There are numerous usages of internalNioBuffer which hard code 0 for the index when the intention was to use the readerIndex().
Modifications:
- Remove hard coded 0 for the index and use readerIndex()
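For illustration, the correct call shape (the readable region need not start at offset 0):

```java
import io.netty.buffer.ByteBuf;

import java.nio.ByteBuffer;

// Illustrative usage: index the internal NIO buffer at readerIndex(), not 0.
final class InternalNioBufferUsage {
    static ByteBuffer readableRegion(ByteBuf buf) {
        return buf.internalNioBuffer(buf.readerIndex(), buf.readableBytes());
    }
}
```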
Result:
We are less susceptible to using the wrong index, and don't make assumptions about the ByteBufAllocator.
Motivation:
Lz4FrameEncoder uses internalNioBuffer but always passes in a value of 0 for the index. This should be readerIndex().
Modifications:
- change 0 to readerIndex()
Result:
More correct usage of internalNioBuffer in Lz4FrameEncoder.
Motivation:
ReferenceCountedOpenSslEngine#wrap must have a direct buffer as the destination to interact with JNI. If the user doesn't supply a direct buffer we internally allocate one to write the results of wrap into. After this operation completes we copy the contents of the direct buffer into the heap buffer and use internalNioBuffer to get the content. However, we pass in the end index while internalNioBuffer expects a length.
Modifications:
- pass the length instead of end index to internalNioBuffer
Result:
ReferenceCountedOpenSslEngine#wrap will copy the correct amount of data into the destination buffer when heap buffers are wrapped.
Motivation:
The internal.hpack classes are no longer exposed in our public APIs and can be made package private in the http2 package.
Modifications:
- Make the hpack classes package private in the http2 package
Result:
Fewer APIs exposed as public.
Motivation:
The SslContextBuilder javadocs still state that KeyManagerFactory and TrustManagerFactory are only supported when SslProvider.JDK is used. This is no longer correct.
Modifications:
Fix javadocs.
Result:
Correct javadocs.