Motivation:
ResourceLeakDetector supports a parameter called maxActive. This parameter is used in an attempt to limit the number of objects which are being tracked for leaks at any given time, and generates an error log message if this limit is exceeded. This assumes that there is a relationship between the leak sample rate and object lifetime for objects which are already being tracked. This relationship may appear to work in cases where there is a single leak record per object and those leak records live for the lifetime of the application, but in general this relationship doesn't exist. The original motivation was to provide a limit for cases such as HashedWheelTimer to limit the number of instances which exist at any given time. This limit is not enforced in all circumstances in HashedWheelTimer (e.g. if the thread is a daemon) and can be implemented outside ResourceLeakDetector.
Modifications:
- Deprecate all methods which interact with maxActive in ResourceLeakDetectorFactory and ResourceLeakDetector
- Remove all logic related to maxActive in ResourceLeakDetector
- HashedWheelTimer implements its own logic to impose a limit and warn users if too many instances exist at any given time (see the sketch below).
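
A minimal sketch of how such a limit could look outside ResourceLeakDetector; the constant, counter and warning wording are illustrative, not necessarily what HashedWheelTimer uses:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: count live instances and warn once when an (assumed) limit is exceeded.
final class InstanceLimitSketch {
    private static final int INSTANCE_COUNT_LIMIT = 64; // assumed limit
    private static final AtomicInteger INSTANCE_COUNTER = new AtomicInteger();
    private static final AtomicBoolean WARNED_TOO_MANY_INSTANCES = new AtomicBoolean();

    InstanceLimitSketch() {
        if (INSTANCE_COUNTER.incrementAndGet() > INSTANCE_COUNT_LIMIT
                && WARNED_TOO_MANY_INSTANCES.compareAndSet(false, true)) {
            System.err.println("You are creating too many instances; "
                    + "share a single instance across your application.");
        }
    }

    void stop() {
        // Decrement when the instance is no longer in use.
        INSTANCE_COUNTER.decrementAndGet();
    }
}
```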
Result:
Fixes https://github.com/netty/netty/issues/6225.
Motivation:
The SSLEngine wrap and unwrap methods can be called in a way that has no side effects, but this could involve costly validation and allocation. The SslHandler should avoid calling into these methods if possible.
Modifications:
- wrapNonAppData should provide additional status which can be used by wrap to break out early if possible
Result:
SslHandler invokes the SSLEngine less.
Motivation:
DefaultHttp2FrameReader contains the logic for parsing the network
traffic from an HTTP/2 client and also validates whether the content is legal.
Keeping a high coverage rate on it increases the stability of the API.
Modifications:
Add unit tests for DefaultHttp2FrameReader
Result:
Coverage rate increased
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Less dependencies and more consistent mocking usage.
Motivation:
When comparing MAC addresses searching for the best MAC address, if a
locally-administered address (e.g., from a Docker container) is compared
against an empty MAC address, the empty MAC address will be marked as
preferred. In cases where this is the only available MAC address, this leaves
Netty using a random machine ID instead of using a perfectly valid
machine ID from the locally-administered address.
Modifications:
This commit modifies the MAC address logic so that the empty MAC address
is not preferred over a locally-administered address. This commit also
simplifies the comparison logic here.
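
A simplified sketch of the intended preference order; the helper name and exact scoring are illustrative, the real logic lives in the MAC address detection code:

```java
// Illustrative comparison: an empty/invalid candidate never wins, and a valid
// locally-administered address is never lost to an empty current address.
final class MacPreferenceSketch {
    /** Returns > 0 if candidate should replace current, < 0 otherwise. */
    static int compareAddresses(byte[] current, byte[] candidate) {
        if (candidate == null || candidate.length < 6) {
            return -1; // empty/invalid candidate is never preferred
        }
        if (current == null || current.length < 6) {
            return 1;  // any valid address beats an empty one
        }
        boolean currentGlobal = (current[0] & 0x02) == 0;   // locally-administered bit clear
        boolean candidateGlobal = (candidate[0] & 0x02) == 0;
        if (currentGlobal != candidateGlobal) {
            return candidateGlobal ? 1 : -1; // prefer globally unique between two valid addresses
        }
        return 0;
    }
}
```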
Result:
Empty MAC addresses will not be preferred over locally-administered
addresses, thus permitting the default machine ID to be the
locally-administered MAC address if it is the only available MAC address.
Motivation:
DnsNameResolver does not handle recursive DNS and so fails if you query a DNS server (for example a root DNS server) which provides the correct redirect for a domain.
Modification:
Add support for redirects (a.k.a. handling of the AUTHORITY section).
Result:
It's now possible to use a DNS server that redirects.
Motivation:
NioDatagramChannel fails a write with NotYetConnectedException when the DatagramChannel was not yet connected and a ByteBuf is written. The same should be done for OioDatagramChannel as well.
Modifications:
Make OioDatagramChannel consistent with NioDatagramChannel
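
A sketch of the consistent check; the helper is illustrative, the real check sits in the channel's write path:

```java
import java.nio.channels.NotYetConnectedException;

import io.netty.buffer.ByteBuf;

// Illustrative: a plain ByteBuf can only be written on a connected datagram channel;
// messages that carry their own recipient (e.g. DatagramPacket) are unaffected.
final class DatagramWriteCheckSketch {
    static void checkConnected(Object msg, boolean connected) {
        if (msg instanceof ByteBuf && !connected) {
            throw new NotYetConnectedException();
        }
    }
}
```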
Result:
Correct and consistent implementations of DatagramChannel
Motivation:
64-byte alignment is recommended by the Intel performance guide (https://software.intel.com/en-us/articles/practical-intel-avx-optimization-on-2nd-generation-intel-core-processors) for data-structures over 64 bytes.
Requiring padding to a multiple of 64 bytes allows for using SIMD instructions consistently in loops without additional conditional checks. This should allow for simpler and more efficient code.
Modification:
At the moment cache alignment must be set up manually, but it could probably be derived from the system. The original code was introduced by @normanmaurer in https://github.com/netty/netty/pull/4726/files
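
For reference, rounding a requested capacity up to the next multiple of the alignment is simple bit arithmetic (sketch, assuming a power-of-two alignment such as 64):

```java
// Illustrative: pad a requested capacity to a multiple of the cache alignment.
final class AlignmentSketch {
    static int alignCapacity(int reqCapacity, int alignment) {
        int delta = reqCapacity & (alignment - 1); // alignment must be a power of two
        return delta == 0 ? reqCapacity : reqCapacity + alignment - delta;
    }

    public static void main(String[] args) {
        System.out.println(alignCapacity(100, 64)); // 128
    }
}
```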
Result:
Aligned buffers perform better than buffers that are misaligned with the cache.
Motivation:
We did not have tests for ByteBufAllocator implementations in general.
Modifications:
Added ByteBufAllocatorTest, AbstractByteBufAllocatorTest and UnpooledByteBufAllocatorTest
Result:
More tests for allocator implementations.
Motivation:
HttpObjectAggregator yields full HTTP messages (AggregatedFullHttpMessages) that don't respect the decoder result when copied/replaced.
Modifications:
Copy the decoder result over to the new instance produced by AggregatedFullHttpRequest.replace or AggregatedFullHttpResponse.replace.
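
Conceptually, the replacement message must carry the original decoder result over; a sketch (not the actual aggregator code):

```java
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;

// Illustrative: build the replacement and keep the decoder result of the original.
final class ReplaceSketch {
    static FullHttpRequest replace(FullHttpRequest original, ByteBuf newContent) {
        FullHttpRequest replacement = new DefaultFullHttpRequest(
                original.protocolVersion(), original.method(), original.uri(), newContent);
        replacement.headers().set(original.headers());
        replacement.setDecoderResult(original.decoderResult()); // the previously missing step
        return replacement;
    }
}
```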
Result:
DecoderResult is now copied over when an original AggregatedFullHttpMessage is being replaced (i.e., AggregatedFullHttpRequest.replace or AggregatedFullHttpResponse.replace is being called).
New unit tests are passing on this branch but are failing on master.
Motivation:
We should ensure we do not need any extra wrap / unwrap calls during the handshake once the handshake has been signaled as finished.
Modifications:
More strict testing
Result:
Better testing of handshake behaviour
Motivation:
Previous versions of netty-tcnative used the org.apache.tomcat namespace which could lead to problems when a user tried to use tomcat and netty in the same app.
Modifications:
Use netty-tcnative which now uses a different namespace and adjust code to some API changes.
Result:
It's now possible to use netty-tcnative even when running together with tomcat.
Motivation:
codec-http2 couples the dependency tree state with the remainder of the stream state (Http2Stream). This makes it challenging to implement constraints where stream state and dependency tree state diverge in the RFC. For example the RFC recommends retaining dependency tree state after a stream transitions to closed [1]. Dependency tree state can be exchanged on streams in the IDLE state. In practice clients may use stream IDs for the purpose of establishing QoS classes and therefore retaining this dependency tree state can be important to client perceived performance. It is difficult to limit the total amount of state we retain when stream state and dependency tree state are combined.
Modifications:
- Remove dependency tree, priority, and weight related items from public facing Http2Connection and Http2Stream APIs. This information is optional to track and depends on the flow controller implementation.
- Move all dependency tree, priority, and weight related code from DefaultHttp2Connection to WeightedFairQueueByteDistributor. This is currently the only place which cares about priority. We can pull out the dependency tree related code in the future if it is generally useful to expose for other implementations.
- DefaultHttp2Connection should explicitly limit the number of reserved streams now that IDLE streams are no longer created.
Result:
More compliant with the HTTP/2 RFC.
Fixes https://github.com/netty/netty/issues/6206.
[1] https://tools.ietf.org/html/rfc7540#section-5.3.4
Motivation:
If an HTTP/2 client writes data before the connection preface the peer will shut down the socket. Depending on what is in the pipeline (e.g. SslHandler), different evaluation criteria may be required to infer when codec-http2 has written the connection preface on behalf of the client. This can lead to unnecessary complexity and error-prone/racy application code.
Modifications:
- Introduce a user event that is fired up the pipeline when codec-http2 writes the connection preface
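
An application can then react to the event in a handler; in this sketch MyPrefaceWrittenEvent is a placeholder for the actual event type fired by codec-http2:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Illustrative handler: MyPrefaceWrittenEvent stands in for the real event class.
final class PrefaceListener extends ChannelInboundHandlerAdapter {
    static final class MyPrefaceWrittenEvent { } // placeholder for the codec-http2 event type

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof MyPrefaceWrittenEvent) {
            // The connection preface has been written; application frames can follow safely.
        }
        ctx.fireUserEventTriggered(evt);
    }
}
```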
Result:
Reliable mechanism for applications to use to know when connection preface has been written (related to https://github.com/netty/netty/issues/6272).
Motivation:
PooledByteBuf.capacity(...) fails to enforce maxCapacity(), so it is possible to increase the capacity of the buffer beyond maxCapacity().
Modifications:
- Correctly enforce maxCapacity()
- Add unit tests for capacity(...) calls.
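
The enforcement itself is a simple bounds check before resizing (sketch):

```java
// Illustrative: reject capacities outside [0, maxCapacity] before any resize happens.
final class CapacityCheckSketch {
    static void checkNewCapacity(int newCapacity, int maxCapacity) {
        if (newCapacity < 0 || newCapacity > maxCapacity) {
            throw new IllegalArgumentException(
                    "newCapacity: " + newCapacity + " (expected: 0-" + maxCapacity + ')');
        }
    }
}
```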
Result:
Correctly enforce maxCapacity().
Motivation:
DnsNameResolver will return the domain / host name as ASCII using punycode (https://tools.ietf.org/html/rfc3492). This is different from the JDK, which always converts it to unicode. We should do the same by default but also allow disabling the conversion.
Modifications:
- Add a new builder method on DnsNameResolverBuilder which allows disabling / enabling the conversion. The default is to convert, just like the JDK does.
- Add unit tests for it.
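
Usage would look roughly like this; the builder method name for the toggle is assumed here:

```java
import io.netty.channel.EventLoop;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;

// Illustrative: decodeIdn(true) (assumed method name) makes resolved names unicode, like the JDK.
final class ResolverSetupSketch {
    static DnsNameResolver build(EventLoop eventLoop) {
        return new DnsNameResolverBuilder(eventLoop)
                .channelType(NioDatagramChannel.class)
                .decodeIdn(true)
                .build();
    }
}
```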
Result:
DnsNameResolver and JDK impl behave the same way.
Motivation:
When an HTTP server is listening in plaintext mode, it doesn't have
a chance to negotiate "h2" in the TLS handshake. HTTP/1 clients
that are not expecting an HTTP/2 server will accidentally send a request
that isn't an upgrade, which the HTTP/2 decoder will not
understand. The decoder treats the bytes as hex and adds them to
the error message.
These error messages are hard for humans to understand, and result
in extra, manual work to decode.
Modification:
If the first bytes of the request are not the preface, the decoder
will now see if they are an HTTP/1 request first. If so, the error
message will include the method and path of the original request in
the error message.
In case the path is long, the decoder will check up to the first
1024 bytes to see if it matches. This could be a DoS vector if
tons of bad requests or other garbage come in. A future optimization
would be to treat the first few bytes as an AsciiString and not do
any Charset decoding. ByteBuf.toCharSequence alludes to such an
optimization.
The code has been left simple for the time being.
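
A sketch of the kind of check described above; constant and method names are illustrative and the real logic lives in codec-http2's preface handling:

```java
import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;

// Illustrative: if the first bytes look like an HTTP/1.x request line, report the
// method and path instead of a hex dump of the bytes.
final class PrefaceErrorSketch {
    private static final int MAX_SEARCH_BYTES = 1024;

    static String describeBadPreface(ByteBuf in) {
        int len = Math.min(in.readableBytes(), MAX_SEARCH_BYTES);
        String head = in.toString(in.readerIndex(), len, StandardCharsets.US_ASCII);
        boolean looksLikeHttp1 = head.startsWith("GET ") || head.startsWith("POST ")
                || head.startsWith("PUT ") || head.startsWith("HEAD ") || head.startsWith("DELETE ");
        if (looksLikeHttp1) {
            int eol = head.indexOf('\r');
            String requestLine = eol > 0 ? head.substring(0, eol) : head;
            return "Unexpected HTTP/1.x request: " + requestLine;
        }
        return "HTTP/2 client preface missing or corrupt.";
    }
}
```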
Result:
Faster identification of errant HTTP requests.
Motivation:
Tests were added in 91f050d2ef to run with different protocols / ciphers. These may fail currently when openssl was compiled without support for the protocol / ciphers.
Modifications:
- Refactor tests to easier understand for which protocol / cipher it failed
- Do not fail the test if the protocol is not supported by the used openssl version.
Result:
More robust testing.
Motivation:
When the ThreadLocal cache is disabled and pooled ByteBufs are allocated and then all released, PoolArena's tiny/small/normal allocation counts are incorrect.
Modifications:
- Count PoolArena's tiny/small/normal allocations only once
- Add testAllocationCounter TestCase
Result:
Fixes #6282.
Motivation:
A testing goof in 7c630fe introduced a binary incompatibility when the old Promise-specific `add` and `addAll` methods in PromiseCombiner were generalized to accept `Futures`.
Modification:
- Restore (but mark as `@Deprecated`) old PromiseCombiner methods.
- Fixed a couple minor documentation typos because sure why not.
Result:
`PromiseCombiner` is binary-compatible with previous versions of Netty.
Motivation:
We failed to properly test if a protocol is supported on an OpenSSL installation and just always returned all protocols.
Modifications:
- Detect which protocols are supported on a platform.
- Skip protocols in tests when not supported. This fixes a build error on some platforms introduced by [#6276].
Result:
Correctly return only the supported protocols
Motivation:
We used ca. 2k as the maximum overhead for encrypted packets, which is a lot more than what OpenSSL actually needs. This could lead to more memory being needed.
Modification:
- Use a lower overhead of 86 bytes as defined by the spec and openssl itself
- Fix unit test to use the correct session to calculate needed buffer size
Result:
Less memory usage.
Motivation:
2fd42cfc6b fixed a bug related to encoding headers but it also introduced a throws statement onto the Http2FrameWriter methods which write headers. This throws statement makes the API more verbose and is not necessary because we can communicate the failure in the ChannelFuture that is returned by these methods.
Modifications:
- Remove throws from all Http2FrameWriter methods.
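
The pattern used instead of a throws clause is to fail the returned promise (sketch):

```java
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelPromise;

// Illustrative: report encoding failures through the ChannelFuture instead of `throws`.
final class WriteFailureSketch {
    static ChannelFuture writeHeaders(ChannelPromise promise) {
        try {
            // ... encode and write the HEADERS frame here ...
            return promise;
        } catch (Throwable cause) {
            return promise.setFailure(cause);
        }
    }
}
```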
Result:
Http2FrameWriter APIs do not propagate checked exceptions.
Motivation:
We should call Epoll.ensureAvailability() when initializing EpollEventLoopGroup to fail fast and with a proper exception.
Modifications:
Call Epoll.ensureAvailability() during EpollEventLoopGroup init.
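
The fail-fast check is just a call during construction (sketch of the pattern):

```java
import io.netty.channel.epoll.Epoll;

// Illustrative: calling ensureAvailability() up front surfaces the real cause
// (e.g. a missing native library) instead of a confusing failure later on.
final class EpollInitSketch {
    EpollInitSketch() {
        Epoll.ensureAvailability();
    }
}
```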
Result:
Fail fast if epoll is not available (for whatever reason).
Motivation:
In PooledByteBuf we missed nulling out the chunk and tmpNioBuf fields before recycling it to the Recycler. This could keep objects alive longer than necessary, which may hold on to a lot of memory.
Modifications:
Null out tmpNioBuf and chunk before recycle.
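
Conceptually (sketch; field names follow the description above):

```java
import java.nio.ByteBuffer;

// Illustrative: drop references before the wrapper goes back to the Recycler so the
// pooled chunk and the cached NIO buffer can be collected.
final class RecycleSketch {
    Object chunk;          // stands in for PoolChunk<T>
    ByteBuffer tmpNioBuf;

    void recycle() {
        chunk = null;
        tmpNioBuf = null;
        // ... hand this instance back to the Recycler here ...
    }
}
```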
Result:
Objects can be GC'ed earlier.
Motivation:
RFC 7540 section 5.5 states that it is not permitted to send an
extension frame in the middle of a header block and that this needs
to be treated as a protocol error.
Modifications:
When an extension frame (called an unknown frame in Netty) is received,
verify whether a headersContinuation exists (see the sketch below).
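
A sketch of the verification; the helper shape is illustrative, the actual check sits in the frame reader's unknown-frame path:

```java
import io.netty.handler.codec.http2.Http2Error;
import io.netty.handler.codec.http2.Http2Exception;

// Illustrative: an unknown (extension) frame must not interrupt an in-progress header block.
final class UnknownFrameCheckSketch {
    static void verifyNotInHeaderBlock(Object headersContinuation) throws Http2Exception {
        if (headersContinuation != null) {
            throw Http2Exception.connectionError(Http2Error.PROTOCOL_ERROR,
                    "Received an unknown frame in the middle of a header block");
        }
    }
}
```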
Result:
When an extension frame is received in the middle of a header block,
a connection error is thrown and the connection is closed.
Motivation:
We should test that we correctly return BUFFER_UNDERFLOW if the src buffer does not contain enough data to unwrap.
Modification:
Add unit test to verify behaviour.
Result:
Better test coverage of SSLEngine implementations.
Motivation:
We need to update jetty-alpn-agent to support latest JDK releases.
Modifications:
Update jetty-alpn-agent to 2.0.6
Result:
Be able to run tests with latest JDK releases.
Motivation:
HttpUtil.setTransferEncodingChunked could add a second Transfer-Encoding
header if one was already present. While this is technically valid, it
does not appear to be the intent of the method.
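
The intended behaviour can be sketched as follows (not the exact HttpUtil code):

```java
import io.netty.handler.codec.http.HttpHeaderNames;
import io.netty.handler.codec.http.HttpHeaderValues;
import io.netty.handler.codec.http.HttpMessage;

// Illustrative: only add "chunked" if it is not already present, so a second
// Transfer-Encoding header never appears.
final class ChunkedSketch {
    static void setTransferEncodingChunked(HttpMessage m) {
        if (!m.headers().contains(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED, true)) {
            m.headers().add(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED);
        }
        m.headers().remove(HttpHeaderNames.CONTENT_LENGTH);
    }
}
```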
Result:
Only one Transfer-Encoding header is present after calling this method.
Motivation:
When an empty hostname is used in DnsNameResolver.resolve*(...) it will never notify the future / promise. The root cause is that we do not correctly guard against errors from IDN.toASCII(...), which throws an IllegalArgumentException when it can not parse its input. That said, we should also handle an empty hostname the same way as the JDK does and just use "localhost" in this case.
Modifications:
- If we try to resolve an empty hostname, use localhost
- Correctly guard against errors raised by IDN.toASCII(...) so we will always notify the future / promise
- Add unit test.
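
A sketch of the guarded conversion (names illustrative):

```java
import java.net.IDN;

// Illustrative: fall back to "localhost" for empty input and never let
// IDN.toASCII(...) errors escape and leave the promise unnotified.
final class HostnameSketch {
    static String toAsciiSafe(String inetHost) {
        if (inetHost == null || inetHost.isEmpty()) {
            return "localhost";
        }
        try {
            return IDN.toASCII(inetHost);
        } catch (IllegalArgumentException e) {
            return inetHost; // the resolution itself will fail the promise later
        }
    }
}
```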
Result:
DnsNameResolver.resolve*(...) will always notify the future.
Motivation:
SslHandler closed the channel as soon as it was able to write out the close_notify message. This may not be what the user wants, as it may make sense to only close it after the actual response to the close_notify was received in order to guarantee a clean shutdown of the connection in all cases.
Besides this, closeNotifyFlushTimeoutMillis is volatile and so may change between two reads; we need to cache it in a local variable to ensure it does not change in between. We also need to check whether the flush promise was already complete before scheduling the timeout, as this may have happened while we were not yet able to cancel the timeout. Otherwise we will produce a misleading log message.
Modifications:
- Add a new setter / getter to SslHandler which allows specifying the behavior (the old behavior is preserved as the default)
- Added unit test.
- Cache volatile closeNotifyTimeoutMillis.
- Correctly check if flush promise was complete before we try to forcibly close the Channel and log a warning.
- Add missing javadocs.
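
Configuration would look roughly like this; the setter name is assumed here and the timeout value is arbitrary:

```java
import io.netty.handler.ssl.SslHandler;

// Illustrative: opt in to waiting for the peer's response to our close_notify
// before the channel is actually closed.
final class SslCloseConfigSketch {
    static void configure(SslHandler sslHandler) {
        sslHandler.setCloseNotifyReadTimeoutMillis(3000); // wait up to 3s for the peer's close_notify
    }
}
```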
Result:
A cleaner shutdown of the connection is possible when using SSL, the racy way of scheduling close_notify flush timeouts is fixed, and javadocs are added.
This prevents the first request, which hasn't acked the settings, from causing a GOAWAY when we would
be under the maxHeaderListSizeGoAway that would have been set after the settings ack.
Motivation:
PR [#6238] added guards to be able to call wrap(...) / unwrap(...) after the engine was shut down. Unfortunately one case was missed, which is when closeOutbound() was called and produced some data while closeInbound() was not called yet.
Modifications:
Correctly guard against SSLException when closeOutbound() was called, produced data and someone calls wrap(...) after it.
Result:
No more SSLException. Fixes [#6260].
Motivation:
SslHandler has multiple methods which have better replacements now or are obsolete. We should mark these as `@Deprecated`.
Modifications:
Mark methods as deprecated.
Result:
API cleanup preparation.
Motivation:
In commit fc3c9c9523 I changed the way we calculate the capacity of the ByteBuf needed for wrap operations that happen during writes when the SslHandler is used. This had the effect that the same ByteBuf capacity is needed for the JDK implementation of SSLEngine as well as for our SSLEngine implementation that uses OpenSSL / BoringSSL / LibreSSL. Unfortunately this had the side-effect that applications that use our SSLEngine implementation now need a lot more memory, as basically the JDK implementation always needs a 16kb buffer for each wrap while we can do a lot better for our SSLEngine implementation.
Modification:
- Resurrect the code that calculates a better ByteBuf capacity when using our SSLEngine implementation and so save a lot of memory
- Add test-case to ensure it works as expected and is not removed again later on.
Result:
Memory footprint of applications that uses our SSLEngine implementation based on OpenSSL / BoringSSL / LibreSSL is back to the same amount of before commit fc3c9c9523.
Motivation:
Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.
This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.
Modifications:
I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.
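
The wrapping pattern looks roughly like this (sketch; the actual utility covers connect, bind, accept and DNS lookups):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.security.AccessController;
import java.security.PrivilegedActionException;
import java.security.PrivilegedExceptionAction;

// Illustrative: run the privileged socket operation inside doPrivileged so only
// the code performing it (the Netty jar) needs the SocketPermission.
final class PrivilegedConnectSketch {
    static void connect(final Socket socket, final InetSocketAddress remote, final int timeoutMillis)
            throws IOException {
        try {
            AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
                @Override
                public Void run() throws IOException {
                    socket.connect(remote, timeoutMillis);
                    return null;
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
}
```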
Result:
A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
Motivation:
If the HPACK Decoder detects that SETTINGS_MAX_HEADER_LIST_SIZE has been violated it aborts immediately and sends a RST_STREAM frame for whatever stream caused the issue. Because HPACK is stateful, this means that the HPACK state may become out of sync between peers, and the issue won't be detected until the next headers frame. We should make a best effort to keep processing, to keep the HPACK state in sync with our peer, or completely close the connection.
If the HPACK Encoder is configured to verify SETTINGS_MAX_HEADER_LIST_SIZE it checks the limit and encodes at the same time. This may result in modifying the HPACK local state but not sending the headers to the peer if SETTINGS_MAX_HEADER_LIST_SIZE is violated. This will also lead to an inconsistency in HPACK state that will be flagged at some later time.
Modifications:
- HPACK Decoder now has 2 levels of limits related to SETTINGS_MAX_HEADER_LIST_SIZE. The first will attempt to keep processing data and send a RST_STREAM after all data is processed. The second will send a GO_AWAY and close the entire connection.
- When the HPACK Encoder enforces SETTINGS_MAX_HEADER_LIST_SIZE it should not modify the HPACK state until the size has been checked.
- https://tools.ietf.org/html/rfc7540#section-6.5.2 states that the initial value of SETTINGS_MAX_HEADER_LIST_SIZE is "unlimited". We currently use 8k as a limit. We should honor the specifications default value so we don't unintentionally close a connection before the remote peer is aware of the local settings.
- Remove unnecessary object allocation in DefaultHttp2HeadersDecoder and DefaultHttp2HeadersEncoder.
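
The two levels can be pictured as a soft and a hard threshold (sketch; field names follow the description above):

```java
// Illustrative: the soft limit leads to a RST_STREAM once the header block has been
// fully processed (keeping HPACK state in sync); the hard limit ends the connection.
final class HeaderListLimitsSketch {
    long maxHeaderListSize;        // soft limit -> stream error after processing completes
    long maxHeaderListSizeGoAway;  // hard limit -> GOAWAY / close the connection

    boolean exceedsSoftLimit(long headersLength) {
        return headersLength > maxHeaderListSize;
    }

    boolean exceedsHardLimit(long headersLength) {
        return headersLength > maxHeaderListSizeGoAway;
    }
}
```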
Result:
Fixes https://github.com/netty/netty/issues/6209.
Motivation:
When we follow CNAME records we should respect resolvedAddressTypes and only query A / AAAA depending on which address types are expected.
Modifications:
Check if we should query A / AAAA when following CNAMEs, depending on resolvedAddressTypes.
Result:
Correct behaviour when following CNAMEs.
Motivation:
Replacing System.err during Slf4JLoggerFactory construction is problematic as another class may obtain the System.err reference before we set it back to the original value.
Modifications:
Remove code that temporarily replaced System.err.
Result:
Fixes [#6212].
Motivation:
As we use different execution paths in our SSLEngine implementation depending on whether heap, direct or mixed buffers are used, we should run the tests with all of them.
Modification:
Ensure we run all tests with different buffer types.
Result:
Better test-coverage