Motivation:
The implementation of `AbstractTrafficShapingHandler.userDefinedWritabilityIndex()` references its subclasses.
In addition, one of the subclasses overrides it while the other does not.
Modifications:
Override the method in the second subclass as well and remove the references to subclasses from the parent class.
Result:
More consistent and cleaner code, in line with OOP principles.
Motivation:
325cc84a2e introduced new tests which use classes that are only provided by Java 8+. We need to ensure we only try to load the classes needed for these tests when running on Java 8+, so the testsuite can still be run with Java 7.
Modifications:
Add an extra class which only gets loaded when Java 8+ is used and move the code there.
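A minimal sketch of the pattern (class and method names are hypothetical, and PlatformDependent#javaVersion() is assumed as the version check): the Java 8-only JDK types live in a separate class, so they are only resolved when that class is actually loaded.

    import io.netty.util.internal.PlatformDependent;

    // Hypothetical holder class: referencing the Java 8-only type here is safe because
    // this class is only loaded behind the version check in the caller below.
    final class Java8Only {
        static void runJava8OnlyChecks() {
            java.util.Optional<String> value = java.util.Optional.of("needs Java 8+");
            System.out.println(value.get());
        }
    }

    final class Caller {
        static void maybeRunJava8Checks() {
            // Java8Only is only touched (and thus loaded) when running on Java 8+.
            if (PlatformDependent.javaVersion() >= 8) {
                Java8Only.runJava8OnlyChecks();
            }
        }
    }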
Result:
No more class-loading issues when running the tests with Java 7.
Motivation:
As we provide our own SSLEngine implementation, we should have benchmarks to compare it against the JDK implementation.
Modifications:
Add benchmarks for wrap / unwrap and handshake performance.
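A hedged sketch of what such a benchmark could look like (JMH is assumed and the engine setup is omitted); this is illustrative only, not the benchmark code that was added.

    import java.nio.ByteBuffer;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;
    import javax.net.ssl.SSLException;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Benchmark)
    public class SslEngineWrapBenchmark {
        // Assumed to be initialised in a @Setup method with either the JDK or the
        // OpenSSL-based SSLEngine after the handshake has completed.
        SSLEngine engine;
        final ByteBuffer src = ByteBuffer.allocateDirect(1024);
        final ByteBuffer dst = ByteBuffer.allocateDirect(64 * 1024);

        @Benchmark
        public SSLEngineResult wrap() throws SSLException {
            src.clear();
            dst.clear();
            return engine.wrap(src, dst);
        }
    }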
Result:
Benchmarks FTW.
Motivation:
The timeouts used in Http2ConnectionRoundtripTest seem to be too low when leak detection is enabled, so tests sometimes fail due to timeouts.
Modifications:
Increase timeouts.
Result:
Fixes [#6442].
Motivation:
DatagramPacketEncoder and DatagramPacketDecoder should respect whether the wrapped handler is sharable and be sharable (or not) accordingly.
Modifications:
- Delegate isSharable() to the wrapped handler (see the sketch below)
- Add test-cases
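A hedged sketch of the idea (the class is hypothetical, not the actual DatagramPacketEncoder/Decoder code): the wrapper reports itself as sharable only if the handler it delegates to is sharable.

    import io.netty.channel.ChannelHandlerAdapter;

    final class DelegatingHandler extends ChannelHandlerAdapter {
        private final ChannelHandlerAdapter wrapped;

        DelegatingHandler(ChannelHandlerAdapter wrapped) {
            this.wrapped = wrapped;
        }

        @Override
        public boolean isSharable() {
            // Mirror the sharability of the wrapped handler instead of relying on an
            // @Sharable annotation on this class.
            return wrapped.isSharable();
        }
    }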
Result:
Correct behavior
Motivation:
We do not support all SSLParameters settings, so we should throw if a user tries to use an unsupported one.
Modifications:
- Check for unsupported parameters (sketched below)
- Add unit test
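A hedged, illustrative sketch (the concrete parameters Netty checks may differ): fail fast when the caller requests an SSLParameters feature the engine does not implement.

    import javax.net.ssl.SSLParameters;

    final class SslParametersValidator {
        static void checkSupported(SSLParameters params) {
            if (params.getAlgorithmConstraints() != null) {
                throw new IllegalArgumentException("AlgorithmConstraints are not supported");
            }
            if (params.getSNIMatchers() != null && !params.getSNIMatchers().isEmpty()) {
                throw new IllegalArgumentException("SNIMatchers are not supported");
            }
        }

        private SslParametersValidator() { }
    }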
Result:
Less surprising behavior.
Motivation:
Almost all our benchmarks are in src/main/java, but a few are in src/test/java. We should make this consistent.
Modifications:
Move everything to src/main/java
Result:
Consistent code base.
Motivation:
Commit 8dda984afe introduced a regression which led to the allocator not being set when PooledByteBuf.initUnpooled(...) is called. As a result, PooledByteBuf.alloc() could return null or the wrong allocator if multiple PooledByteBufAllocator instances are used in an application.
Modifications:
- Correctly set the allocator
- Add test-case
Result:
Fixes [#6436].
Motivation:
Calling a static method is faster than a virtual (dynamically dispatched) one.
Modifications:
Add the 'static' keyword to methods where it was missing.
Result:
Slightly faster method calls.
Motivation:
We shipped a javassist-based implementation for type matching and logged a confusing debug message when javassist was missing. We were never able to prove that it really gives any performance improvement, so we should just remove it.
Modifications:
- Remove the javassist dependency and implementation
- Fix a possible classloader deadlock reported by IntelliJ
Result:
Less code to maintain and no more confusing log message.
Motivation:
Base64#decode4to3 generally calculates an int value where the contents of the decodabet straddle bytes, and then uses a byte shifting or a full byte swapping operation to get the resulting contents. We can directly calculate the contents and avoid any intermediate int values and full byte swap operations. This will reduce the number of operations required during the decode operation.
Modifications:
- remove the intermediate int in the Base64#decode4to3 method.
- manually do the byte shifting since we are already doing bit/byte manipulation here anyway.
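A hedged sketch of the resulting arithmetic (names are illustrative, not the actual Base64 code): the four 6-bit decodabet values are shifted straight into the three output bytes, with no intermediate 24-bit int and no byte-swap pass.

    final class Base64Sketch {
        static void decode4to3(int b1, int b2, int b3, int b4, byte[] out, int off) {
            out[off]     = (byte) ((b1 << 2) | (b2 >>> 4)); // 6 bits of b1, top 2 bits of b2
            out[off + 1] = (byte) ((b2 << 4) | (b3 >>> 2)); // low 4 bits of b2, top 4 bits of b3
            out[off + 2] = (byte) ((b3 << 6) | b4);         // low 2 bits of b3, all 6 bits of b4
        }

        private Base64Sketch() { }
    }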
Result:
Base64#decode4to3 requires fewer operations to compute the result.
Motivation:
The decode and encode methods use getByte(...) and setByte(...) in loops, which can be very expensive because of bounds and reference-count checking. Besides this, it also slows down a lot when paranoid leak detection is enabled, as each access is tracked.
Modifications:
- Pack bytes into an int / short to reduce the number of operations on the ByteBuf
- Use a ByteProcessor to reduce getByte(...) calls.
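A hedged, simplified illustration of the packing idea (the helper is hypothetical): two adjacent output bytes are combined into a single setShort(...) call so the ByteBuf is touched once instead of twice.

    import io.netty.buffer.ByteBuf;

    final class PackedWrite {
        static void writePair(ByteBuf out, int index, byte high, byte low) {
            // One bounds / reference-count check instead of two setByte(...) calls.
            out.setShort(index, ((high & 0xFF) << 8) | (low & 0xFF));
        }

        private PackedWrite() { }
    }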
Result:
Better performance in general. When running the build with -Pleak, the handler module now builds in a quarter of the time it took before.
Motivation:
We should move the autobahn testsuite to a separate module. This makes it easier to run only the testsuite or only the autobahn testsuite.
Modifications:
Create a new module (testsuite-autobahn)
Result:
Better project structure.
Motivation:
DefaultHttp2FrameWriter contains important logic for how HTTP/2
frames are encoded into their binary representation. Currently,
Netty has no unit test dedicated to DefaultHttp2FrameWriter.
Modifications:
Write DefaultHttp2FrameWriterTest to test DefaultHttp2FrameWriter
Result:
The coverage of DefaultHttp2FrameWriter is increased, and the
class is more reliable.
Motivation:
In a previous PR about handling unknown frames in the middle of a
header block, I did not notice and reuse the existing check for
whether a header is currently being processed, and instead added a
redundant method with the same functionality. That redundant method
could be misleading.
Modifications:
Remove the redundant code in DefaultHttp2FrameReader.
Result:
The code is more readable
Motivation:
Some HTTP/2 unit tests use LocalChannel. LocalChannel's doClose method will ensure any pending items in the queue are released, but it may execute a Runnable on the peer's EventLoop to ensure the peer's queue is also cleaned up. The HTTP/2 unit tests close the event loop groups with no wait time so that they execute quickly, but if the doClose Runnable is still in the EventLoop's queue it will not run, and thus the items in the queue will not be released.
Modifications:
- Ensure all HTTP/2 unit tests which use LocalChannel wait for both client and server channels to be closed before closing the EventLoop.
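A hedged sketch of the pattern (helper and parameter names are assumed, not the tests' literal code): closing both ends and waiting gives LocalChannel#doClose a chance to run its cleanup Runnable on the peer's EventLoop before the groups are shut down with a zero quiet period.

    import java.util.concurrent.TimeUnit;
    import io.netty.channel.Channel;
    import io.netty.channel.EventLoopGroup;

    final class ShutdownHelper {
        static void closeAndShutdown(Channel client, Channel server,
                                     EventLoopGroup clientGroup, EventLoopGroup serverGroup) {
            client.close().syncUninterruptibly();
            server.close().syncUninterruptibly();
            clientGroup.shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
            serverGroup.shutdownGracefully(0, 0, TimeUnit.MILLISECONDS);
        }

        private ShutdownHelper() { }
    }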
Result:
Related to https://github.com/netty/netty/issues/5850.
Motivation:
d8e6fbb9c3 attempted to account for the JDK not throwing the expected SSLHandshakeException by allowing an SSLException to also pass the test. However, in some situations the SSLException will not be the top-level exception, and the Throwable must be unwrapped to see whether the root cause is an SSLException.
Modifications:
- Unwrap exceptions thrown by the JDK's SSLEngine to check for SSLException.
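A hedged sketch of the unwrapping check (the helper is hypothetical): walk the cause chain so an SSLException buried below a wrapper exception is still recognised.

    import javax.net.ssl.SSLException;

    final class Causes {
        static boolean hasSslExceptionCause(Throwable t) {
            for (Throwable cause = t; cause != null; cause = cause.getCause()) {
                if (cause instanceof SSLException) {
                    return true;
                }
            }
            return false;
        }

        private Causes() { }
    }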
Result:
SSLEngineTest (and derived classes) are more reliable.
Motivation:
autobahntestsuite-maven-plugin 0.1.4 was released and supports Java 9.
Modifications:
Update the plugin to be able to run the tests on Java 9.
Result:
The autobahn testsuite can also be run on Java 9.
Motivation:
As netty-tcnative can be built against different native libraries and versions, we should log which one is used.
Modifications:
Log the native library in use after netty-tcnative has been loaded.
Result:
Easier to understand which native SSL library is used.
Motivation:
codec-redis and codec-xml are released modules and should be included in netty-all.
Modifications:
Add codec-redis and codec-xml modules to netty-all pom.
Result:
codec-redis and codec-xml can be used with the netty-all artifact.
Motivation:
When "Too many open files" happens,the URLClassLoader cannot do any classloading because URLClassLoader need a FD for findClass. Because of this the anonymous inner class that is created to re-enable auto read may cause a problem.
Modification:
Pre-create the Runnable that gets scheduled, ensuring its class is not lazily loaded.
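A hedged sketch of the idea (class and field names are illustrative, not Netty's actual code): the Runnable exists as a regular top-level class that is instantiated eagerly, so no additional class has to be loaded at the moment a "Too many open files" error is being handled.

    import io.netty.channel.ChannelConfig;

    final class ReEnableAutoRead implements Runnable {
        private final ChannelConfig config;

        ReEnableAutoRead(ChannelConfig config) {
            this.config = config;
        }

        @Override
        public void run() {
            // Re-enable auto read once the channel has recovered from the error.
            config.setAutoRead(true);
        }
    }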
Result:
No more problems when trying to recover.
Motivation:
OpenSSL doesn't automatically verify hostnames and requires extra method calls to enable this feature [1]. We should allow this to be configured.
Modifications:
- SSLParameters#getEndpointIdentificationAlgorithm() should be respected and configured via the tcnative interfaces.
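A hedged usage sketch (standard JDK SSLParameters API, shown for illustration): this is how a user enables hostname verification on an engine, which the OpenSslEngine now honours.

    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLParameters;

    final class HostnameVerification {
        static void enable(SSLEngine engine) {
            SSLParameters params = engine.getSSLParameters();
            // "HTTPS" enables RFC 2818 style hostname verification during the handshake.
            params.setEndpointIdentificationAlgorithm("HTTPS");
            engine.setSSLParameters(params);
        }

        private HostnameVerification() { }
    }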
Result:
OpenSslEngine respects hostname verification.
[1] https://wiki.openssl.org/index.php/Hostname_validation
Motivation:
We should log why we cannot use ByteBuffer.cleaner so that the user may be able to fix it.
Modifications:
- Use Unsafe to access the field
- Log the exception when we can not use ByteBuffer.cleaner
Result:
Easier to debug why using cleaner is not possible.
Motivation:
ThreadLocalInsecureRandom still references ThreadLocalRandom directly, but shouldn't.
Modifications:
ThreadLocalInsecureRandom should reference PlatformDependent#threadLocalRandom() in comments
Result:
Less usage of internal.ThreadLocalRandom.
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the implementation provided by the JDK when running on JDK >= 7.
Modification:
Use the JDK's ThreadLocalRandom implementation when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
SelectedSelectionKeySet currently uses 2 arrays internally, and users are expected to call flip() to access the underlying array and switch the active array. However, we never use the 2 arrays concurrently, so we can get away with a single array if we are careful about when we reset its elements.
Modifications:
- Introduce SelectedSelectionKeySetSelector which wraps a Selector and ensures we reset the underlying SelectedSelectionKeySet data structures before we select (a simplified sketch of the single-array key set follows)
- The loop bounds in NioEventLoop#processSelectedKeysOptimized can be defined more precisely because we know the real size of the underlying array
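A hedged, simplified sketch of the single-array data structure (the class is hypothetical, not Netty's SelectedSelectionKeySet): entries are only valid up to size, and reset() nulls out the used slots before the next select so stale keys can be garbage collected.

    import java.nio.channels.SelectionKey;
    import java.util.AbstractSet;
    import java.util.Arrays;
    import java.util.Iterator;

    final class FlatSelectionKeySet extends AbstractSet<SelectionKey> {
        private SelectionKey[] keys = new SelectionKey[1024];
        private int size;

        @Override
        public boolean add(SelectionKey key) {
            if (key == null) {
                return false;
            }
            if (size == keys.length) {
                keys = Arrays.copyOf(keys, size << 1); // grow when full
            }
            keys[size++] = key;
            return true;
        }

        void reset() {
            Arrays.fill(keys, 0, size, null); // drop references before the next select
            size = 0;
        }

        @Override
        public int size() {
            return size;
        }

        @Override
        public Iterator<SelectionKey> iterator() {
            throw new UnsupportedOperationException();
        }
    }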
Result:
Fixes https://github.com/netty/netty/issues/6058
Motivation:
Java 9 does not allow changing the access level via reflection by default. This led to Netty disabling Unsafe completely, as ByteBuffer.address could not be read.
Modification:
Use Unsafe to read the address field as this works on all Java versions.
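A hedged sketch of the technique (error handling omitted, and Netty's actual code differs in detail): the address field is read via Unsafe's field-offset API rather than via setAccessible(true), so it works on Java 9 as well.

    import java.lang.reflect.Field;
    import java.nio.Buffer;
    import java.nio.ByteBuffer;
    import sun.misc.Unsafe;

    final class DirectAddress {
        static long addressOf(Unsafe unsafe, ByteBuffer direct) throws NoSuchFieldException {
            Field addressField = Buffer.class.getDeclaredField("address");
            long offset = unsafe.objectFieldOffset(addressField);
            return unsafe.getLong(direct, offset);
        }

        private DirectAddress() { }
    }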
Result:
Unsafe optimisations can again be used when running Netty on Java 9.
Motivation:
We missed some things in 5728e0eb2c, so the build failed on Java 9.
Modifications:
- Add extra command-line arguments when needed
- Skip the autobahn testsuite as Jython does not work with Java 9
- Skip the OSGi testsuite as the Maven plugin does not work with Java 9
Result:
The build finally passes on Java 9.
Motivation:
Commit 591293bfb4 changed the build to require Java 8 but did not adjust the enforcer rule as well.
Modifications:
Enforce Java 8+.
Result:
Fail fast when a user tries to compile with a pre-Java 8 JDK.
Motivation:
EpollDatagramChannel uses getOption in the isActive method. getOption is backed by a relatively large if/else-if block, and this conditional checking can be avoided in the epoll transport.
Modifications:
- Add EpollDatagramChannelConfig#getActiveOnOpen and use this in EpollDatagramChannel
Result:
Conditional checking due to getOption is removed from EpollDatagramChannel.
Motivation:
When the EPOLLRDHUP event is received we assume that the read side of the FD is no longer functional and force the input state to be shutdown. However if the channel is still active we should rely upon EPOLLIN and read to indicate there is no more data before we update the shutdown state. If we do not do this we may not read all pending data in the FD if the RecvByteBufAllocator doesn't want to consume it all in a single read operation.
Modifications:
- AbstractEpollChannel#epollRdHupReady() shouldn't force shutdown the input if the channel is active
Result:
All data can be read even if the RecvByteBufAllocator doesn't read it all in the current read loop.
Fixes https://github.com/netty/netty/issues/6303
Motivation:
The allowMaskMismatch parameter used throughout websocketx allows frames
with noncompliant masks when set to true, not false.
Modification:
Changed the javadoc comment everywhere it appears.
Result:
Fixes #6387
Motivation:
NetworkInterface.getNetworkInterfaces() may return null if no network interfaces are found. We should guard against it.
Modifications:
Check for null return value.
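A hedged sketch of the guard (the helper is hypothetical): a null enumeration is treated the same as "no interfaces found" instead of being passed on and risking a NullPointerException.

    import java.net.NetworkInterface;
    import java.net.SocketException;
    import java.util.Collections;
    import java.util.Enumeration;
    import java.util.List;

    final class Interfaces {
        static List<NetworkInterface> all() throws SocketException {
            Enumeration<NetworkInterface> ifaces = NetworkInterface.getNetworkInterfaces();
            return ifaces == null
                    ? Collections.<NetworkInterface>emptyList()
                    : Collections.list(ifaces);
        }

        private Interfaces() { }
    }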
Result:
Fixes [#6384]
Motivation:
When sun.misc.Unsafe is present we want to use the *Unsafe*ByteBuf implementations. We failed to do so in PooledByteBufAllocator when the heapArena is null.
Modifications:
- Correctly use UnpooledUnsafeHeapByteBuf
- Add unit tests
Result:
The optimal ByteBuf implementation is used.
Motivation:
Commit cd3bf3df58 moved Netty to the latest version of netty-tcnative, which changed the way static fields are computed for various SSL.* values. This led to SSL_OP_NO_SSLv2 becoming 0 when using BoringSSL, as BoringSSL does not support SSLv2 at all. The logic in ReferenceCountedOpenSslEngine.getEnabledProtocols() does not expect a zero value and therefore fails.
Modifications:
Check that we actually support the protocol before returning it as enabled.
Result:
SSLEngineTest.testEnablingAnAlreadyDisabledSslProtocol passes again with BoringSSL.
Motivation:
If an event occurs which generates non-application data (shutdown, handshake failure, alert generation, etc...) and the non-application buffer in the ByteBuffer BIO is full (or sufficiently small) we may not propagate all data to our peer before tearing down the socket.
Modifications:
- when wrap() detects the outbound is closed, but there is more data pending in the non-application buffers, we must also check whether OpenSSL will generate more data from calling SSL_shutdown again
- when wrap() detects a handshake exception during failure, we should check whether OpenSSL has any pending data (in addition to the non-application buffers) before throwing the handshake exception
Result:
OpenSslEngine more reliably transmits data to the peer before closing the socket.
Motivation:
Simplify implementation of compareTo/equals/hashCode for ChannelIds.
Modifications:
We simplify the hashCode implementation for DefaultChannelId by not
making it random, but basing it on the underlying data. We fix the
compareTo implementation for DefaultChannelId by using lexicographic
comparison of the underlying data array. We fix the compareTo
implementation for CustomChannelId to avoid the possibility of overflow.
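A hedged, simplified illustration (not DefaultChannelId or CustomChannelId themselves): the hash is derived from the underlying bytes rather than a random value, and the comparison is lexicographic over unsigned byte values, which cannot overflow.

    import java.util.Arrays;

    final class DataBackedId implements Comparable<DataBackedId> {
        private final byte[] data;

        DataBackedId(byte[] data) {
            this.data = data.clone();
        }

        @Override
        public int hashCode() {
            return Arrays.hashCode(data); // stable, content-based hash
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof DataBackedId && Arrays.equals(data, ((DataBackedId) o).data);
        }

        @Override
        public int compareTo(DataBackedId other) {
            int len = Math.min(data.length, other.data.length);
            for (int i = 0; i < len; i++) {
                // Unsigned byte values lie in [0, 255], so the subtraction cannot overflow.
                int cmp = (data[i] & 0xFF) - (other.data[i] & 0xFF);
                if (cmp != 0) {
                    return cmp;
                }
            }
            return data.length - other.data.length;
        }
    }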
Result:
Cleaner code that is easier to maintain.
Motivation:
Java 8 has been out for some time now and JDK 7 is no longer officially supported. We should remove all our backports and just use what the JDK provides. This will also allow us to use intrinsics offered by the JDK implementations.
Modifications:
Remove all backports of JDK 8 classes.
Result:
Use what the JDK offers us. This also fixes [#5458].
Motivation:
JdkOpenSslEngineInteroptTest.mySetupMutualAuthServerIsValidClientException(...) delegated to the wrong super method.
Modifications:
Fix delegate
Result:
Correct test-code.
Motivation:
Some versions of OpenSSL don't support the APIs needed to use a KeyManagerFactory. In this case we should skip the tests.
Modifications:
- Use assumeTrue(...) to skip tests that need a KeyManagerFactory when it is not supported.
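A hedged illustration of the pattern (the capability check used in Netty may be named differently): assumeTrue(...) turns the test into a skipped result instead of a failure when the running OpenSSL build lacks KeyManagerFactory support.

    import org.junit.Assume;
    import org.junit.Test;

    public class KeyManagerFactoryDependentTest {
        private static boolean openSslSupportsKeyManagerFactory() {
            return false; // placeholder; the real check queries the native library
        }

        @Test
        public void testThatNeedsKeyManagerFactory() {
            Assume.assumeTrue(openSslSupportsKeyManagerFactory());
            // ... test body that requires KeyManagerFactory support
        }
    }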
Result:
Tests pass on all OpenSSL versions we support.
Motivation:
Today, the HTTP codec in Netty responds to HTTP/1.1 requests containing
an "expect: 100-continue" header and a content-length that exceeds the
max content length for the server with a 417 status (Expectation
Failed). This is a violation of the HTTP specification. The purpose of
this commit is to address this situation by modifying the HTTP codec to
respond in this situation with a 413 status (Request Entity Too
Large). Additionally, the HTTP codec ignores expectations in the expect
header that are currently unsupported. This commit also addresses this
situation by responding with a 417 status.
Handling the expect header is tricky business as the specification (RFC
2616) is more complicated than it needs to be. The specification defines
the legitimate values for this header as "100-continue" and defines the
notion of expectation extensions. Further, the specification defines a
417 status (Expectation Failed) and this is where implementations go
astray. The intent of the specification was for servers to respond with
417 status when they do not support the expectation in the expect
header.
The key sentence from the specification follows:
The server MUST respond with a 417 (Expectation Failed) status if
any of the expectations cannot be met or, if there are other
problems with the request, some other 4xx status.
That is, a server should respond with a 417 status if and only if there
is an expectation that the server does not support (whether it be
100-continue, or another expectation extension), and should respond with
another 4xx status code if the expectation is supported but there is
something else wrong with the request.
Modifications:
This commit modifies the HTTP codec by changing the handling for the
expect header in the HTTP object aggregator. In particular, the codec
will now respond with 417 status if any expectation other than
100-continue is present in the expect header, the codec will respond
with 413 status if the 100-continue expectation is present in the expect
header and the content-length is larger than the max content length for
the aggregator, and otherwise the codec will respond with 100 status.
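A hedged sketch of the decision described above (the helper is hypothetical, not the aggregator's actual code): 417 for any unsupported expectation, 413 when 100-continue is present but the declared length exceeds the limit, and 100 (Continue) otherwise.

    import io.netty.handler.codec.http.HttpHeaderNames;
    import io.netty.handler.codec.http.HttpHeaderValues;
    import io.netty.handler.codec.http.HttpRequest;
    import io.netty.handler.codec.http.HttpResponseStatus;
    import io.netty.handler.codec.http.HttpUtil;

    final class ExpectHeaderSketch {
        static HttpResponseStatus decide(HttpRequest request, long maxContentLength) {
            String expect = request.headers().get(HttpHeaderNames.EXPECT);
            if (expect == null) {
                return null; // no expectation, nothing to answer early
            }
            if (!HttpHeaderValues.CONTINUE.contentEqualsIgnoreCase(expect)) {
                return HttpResponseStatus.EXPECTATION_FAILED;        // 417: unsupported expectation
            }
            if (HttpUtil.getContentLength(request, -1L) > maxContentLength) {
                return HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE;  // 413: too large to accept
            }
            return HttpResponseStatus.CONTINUE;                      // 100: go ahead
        }

        private ExpectHeaderSketch() { }
    }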
Result:
The HTTP codec can now be used to correctly reply to clients that send a
100-continue expectation with a content-length that is too large for the
server with a 413 status, and servers that use the HTTP codec will now
no longer ignore expectations that are not supported (any value other
than 100-continue).