Motivation:
OpenJDK 15 was released; we should compile with it on the CI.
Modifications:
Add docker-compose files to be able to compile with OpenJDK15
Result:
Compile with the latest major JDK version.
Motivation:
As the PooledByteBufAllocator is a critical part of netty we should ensure it works as expected.
Modifications:
- Add a few more asserts to ensure we do not see any corrupted state
- Null out the slot in the subpage array once the subpage has been freed and removed from the pool
- Merge the method into the constructor, as it was only called from the constructor anyway.
Result:
Code cleanup
Motivation:
Users may want to perform special actions when onComplete(...) is called and depend on these once they receive the SniCompletionEvent.
Modifications:
Switch the order so that onLookupComplete(...) is called before we fire the event.
Result:
Fixes https://github.com/netty/netty/issues/10655
Motivation:
We have a few classes in which we store and reuse static instances of various exceptions. When doing so it is important to also override fillInStackTrace() to prevent leaking the ClassLoader via the internal backtrace field.
Modifications:
- Add overrides of fillInStackTrace() where needed (see the sketch below)
- Move ThrowableUtil usage into the static methods
Result:
Fixes https://github.com/netty/netty/pull/10686
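A minimal sketch of the pattern, with a hypothetical exception holder (not one of the actual Netty classes): overriding fillInStackTrace() on a statically cached exception avoids capturing a backtrace and therefore avoids pinning the ClassLoader.
```
// Hypothetical example: a shared, statically cached exception whose
// fillInStackTrace() override prevents the backtrace (and thus a ClassLoader
// reference) from being recorded.
final class StaticExceptionHolder {
    static final IllegalStateException CLOSED = new IllegalStateException("closed") {
        @Override
        public Throwable fillInStackTrace() {
            // Do not capture a stack trace for the shared instance.
            return this;
        }
    };

    private StaticExceptionHolder() { }
}
```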
Motivation:
The process of reporting security issues should be documented
and easy to find.
Modification:
Added a SECURITY.md file that describes how to report a security issue.
Result:
It's a bit easier to find the docs that describe how security issues
should be reported. Also, when someone creates an issue in the repository,
they will see a link to the security policy.
Motivation:
The link to the OWASP HttpOnly Cookie documentation in the Cookie class is broken.
Modification:
Updated the broken link.
Result:
Broken link fixed for better documentation.
Motivation:
We should use ObjectUtil for checking whether compression parameters are in range. This will reduce LOC and make the code more readable.
Modification:
Used ObjectUtil
Result:
More readable code
Motivation:
All scheduled executors should behave in accordance to their API.
The bug here is that scheduled tasks were not run more than once because we executed the runnables directly, instead of through the provided runnable future.
Modification:
We now run tasks through the provided future, so that when each run completes, the internal state of the task is reset and the ScheduledThreadPoolExecutor is informed of the completion.
This allows the executor to prepare the next run.
Result:
The UnorderedThreadPoolEventExecutor is now able to run scheduled tasks more than once,
which is what one would expect from the API (see the sketch below).
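A minimal sketch of the delegation idea, assuming a wrapper around the RunnableScheduledFuture handed out by ScheduledThreadPoolExecutor (this is not the actual UnorderedThreadPoolEventExecutor code): the wrapper's run() must invoke the provided future, not the original Runnable, or periodic tasks are never rescheduled.
```
import java.util.concurrent.Delayed;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.RunnableScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical wrapper around the future produced by ScheduledThreadPoolExecutor.
// The key point: run() delegates to the provided future, whose internal
// runAndReset() resets the task state so the executor can schedule the next
// periodic run. Running the raw Runnable instead breaks this.
final class DelegatingScheduledFuture<V> implements RunnableScheduledFuture<V> {
    private final RunnableScheduledFuture<V> delegate;

    DelegatingScheduledFuture(RunnableScheduledFuture<V> delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        delegate.run(); // not the original Runnable!
    }

    @Override
    public boolean isPeriodic() {
        return delegate.isPeriodic();
    }

    // Everything else simply delegates.
    @Override public long getDelay(TimeUnit unit) { return delegate.getDelay(unit); }
    @Override public int compareTo(Delayed o) { return delegate.compareTo(o); }
    @Override public boolean cancel(boolean mayInterruptIfRunning) { return delegate.cancel(mayInterruptIfRunning); }
    @Override public boolean isCancelled() { return delegate.isCancelled(); }
    @Override public boolean isDone() { return delegate.isDone(); }
    @Override public V get() throws InterruptedException, ExecutionException { return delegate.get(); }
    @Override public V get(long timeout, TimeUnit unit)
            throws InterruptedException, ExecutionException, TimeoutException {
        return delegate.get(timeout, unit);
    }
}
```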
Motivation:
Java 16 will come around eventually anyway, and this makes it easier for people to experiment with Early Access builds.
Modification:
Added Maven profiles for JDK 16 to relevant pom files.
Result:
Netty now builds on JDK 16 pre-releases (provided they've not broken compatibility in some way).
Motivation:
We check in lots of places whether a number lies in a range, so it's better to add a method to `ObjectUtil` for this.
Modification:
Added Range check method.
Result:
A simpler and more consistent way to check whether a number lies inside a range (a sketch of such a helper follows).
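A minimal sketch of such a range-check helper; the exact name and signature in `ObjectUtil` may differ.
```
// Hypothetical helper: rejects values outside [start, end] with a descriptive
// IllegalArgumentException and returns the value otherwise, so the check can be
// inlined into assignments.
final class RangeChecks {
    static int checkInRange(int value, int start, int end, String name) {
        if (value < start || value > end) {
            throw new IllegalArgumentException(
                    name + ": " + value + " (expected: " + start + '-' + end + ')');
        }
        return value;
    }

    private RangeChecks() { }
}
```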
Motivation:
We should simplify and remove useless bit operations to make code more efficient, faster, and easier to understand.
Modification:
Simplified and removed useless bit operations.
Result:
Simpler code.
Motivation:
Avoid implicit conversions to narrower types in
AbstractMemoryHttpData and Bzip2HuffmanStageEncoder classes
reported by LGTM.
Modifications:
Updated the classes to avoid implicit casting to narrower types.
It doesn't look like an integer overflow is possible there,
therefore no overflow checks were added.
Result:
No warnings about implicit conversions to narrower types.
Motivation:
LGTM reported that WebSocketUtil uses MD5 and SHA-1
that are considered weak. Although those algorithms
are insecure, they are required by draft-ietf-hybi-thewebsocketprotocol-00
specification that is implemented in the corresponding WebSocket
handshakers. Once the handshakers are removed, WebSocketUtil can be
updated to stop using those weak hash functions.
Modifications:
Added SuppressWarnings annotations.
Result:
Suppressed warnings.
Motivation:
- To make ensureWritable throw IOOBE when maxCapacity is exceeded, even if
the requested new capacity would overflow Integer.MAX_VALUE
Modification:
- AbstractByteBuf.ensureWritable0 is modified to detect when
targetCapacity has wrapped around
- Test added for correct behaviour in AbstractByteBufTest
Result:
- Calls to ensureWritable will always throw IOOBE when maxCapacity is
exceeded (and bounds checking is enabled); a sketch of the check follows
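A minimal sketch of the overflow-aware bounds check, using hypothetical names (not the exact AbstractByteBuf code):
```
// Widening to long means writerIndex + minWritableBytes can never wrap around
// Integer.MAX_VALUE, so exceeding maxCapacity is always detected and reported
// as an IndexOutOfBoundsException.
final class WritableCheck {
    static void ensureWritable(int writerIndex, int minWritableBytes, int maxCapacity) {
        long targetCapacity = (long) writerIndex + minWritableBytes;
        if (targetCapacity > maxCapacity) {
            throw new IndexOutOfBoundsException(String.format(
                    "writerIndex(%d) + minWritableBytes(%d) exceeds maxCapacity(%d)",
                    writerIndex, minWritableBytes, maxCapacity));
        }
    }

    private WritableCheck() { }
}
```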
Motivation:
`Http2MultiplexHandler` has 2 empty lines at the end instead of 1.
Modification:
Removed 1 extra line.
Result:
Slightly better code style.
Add validation check for the websocket path
Motivation:
I added a websocket handler to a custom server with Netty.
I first added WebSocketServerProtocolHandler to my channel pipeline.
It works, but I found that it also passes "/websocketabc" (websocketPath is "/websocket").
Modification:
`isWebSocketPath()` method of `WebSocketServerProtocolHandshakeHandler` now checks that "startsWith" applies to the first URL path component, rather than the URL as a string.
Result:
Requests to "/websocketabc" are no longer treated as requests that start with "/websocket" (see the sketch below).
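A minimal sketch of the first-path-component check (hypothetical helper, not the exact WebSocketServerProtocolHandshakeHandler code):
```
// "/websocket" should match "/websocket", "/websocket/..." and "/websocket?...",
// but not "/websocketabc".
final class WebSocketPathCheck {
    static boolean isWebSocketPath(String websocketPath, String uri) {
        if (!uri.startsWith(websocketPath)) {
            return false;
        }
        if (uri.length() == websocketPath.length()) {
            return true;
        }
        char next = uri.charAt(websocketPath.length());
        return next == '/' || next == '?';
    }

    private WebSocketPathCheck() { }
}
```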
Motivation:
We should implement `Closeable` to properly close the `OutputStream` and the `PcapWriteHandler`. Then, whenever `handlerRemoved(ChannelHandlerContext)` is called or the user wants to stop writing Pcap data into the `OutputStream`, we have a proper method to close it; otherwise, writing data to a closed `OutputStream` will result in an `IOException`.
Modification:
Implemented `Closeable` in `PcapWriteHandler` which calls `PcapWriter#close` and closes `OutputStream` and stops Pcap writes.
Result:
Better handling of Pcap writes.
Motivation:
We can use ObjectUtil#checkPositive instead of manually checking maxContentLength. Also, there was a missing JavaDoc for Http2Exception.
Modification:
Used ObjectUtil#checkPositive for checking maxContentLength.
Added missing JavaDoc.
Result:
More readable code.
Motivation:
DefaultAttributeMap::attr has a blocking behaviour on lookup of an existing attribute:
it can be made non-blocking.
Modification:
Replace the existing fixed bucket table, which used a locked intrusive linked list,
with a hand-rolled copy-on-write ordered array (see the sketch below).
Result:
Non-blocking behaviour for the lookup happy path
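A minimal sketch of the copy-on-write idea with an int-keyed map (hypothetical, not the actual DefaultAttributeMap code): the lookup happy path is a single volatile read plus a binary search with no locking, while writers copy the sorted array and publish the new version with a CAS.
```
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical copy-on-write map keyed by int: get() is lock-free, put() copies
// the ordered array and retries the CAS on contention.
@SuppressWarnings("unchecked")
final class CopyOnWriteIntMap<V> {
    private static final class Entry<V> {
        final int key;
        final V value;
        Entry(int key, V value) { this.key = key; this.value = value; }
    }

    private final AtomicReference<Entry<V>[]> entries =
            new AtomicReference<>((Entry<V>[]) new Entry[0]);

    V get(int key) {
        Entry<V>[] array = entries.get();   // happy path: one volatile read, no locks
        int idx = search(array, key);
        return idx >= 0 ? array[idx].value : null;
    }

    void put(int key, V value) {
        for (;;) {
            Entry<V>[] current = entries.get();
            int idx = search(current, key);
            Entry<V>[] next;
            if (idx >= 0) {
                next = current.clone();                     // replace in a copy
                next[idx] = new Entry<V>(key, value);
            } else {
                int pos = -(idx + 1);                       // insertion point keeps order
                next = (Entry<V>[]) new Entry[current.length + 1];
                System.arraycopy(current, 0, next, 0, pos);
                next[pos] = new Entry<V>(key, value);
                System.arraycopy(current, pos, next, pos + 1, current.length - pos);
            }
            if (entries.compareAndSet(current, next)) {
                return;                                     // published atomically
            }
        }
    }

    private static <V> int search(Entry<V>[] array, int key) {
        int low = 0, high = array.length - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            if (array[mid].key < key) {
                low = mid + 1;
            } else if (array[mid].key > key) {
                high = mid - 1;
            } else {
                return mid;
            }
        }
        return -(low + 1);
    }
}
```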
Motivation:
We need to take the Provider into account as well when trying to detect if TLSv1.3 is used by default / supported
Modifications:
- Change utility method to respect provider as well
- Change test code
Result:
Less error-prone tests
Motivation:
Some JDKs disallow the usage of key sizes < 2048, so we should not use such small key sizes in tests.
This showed up on Fedora 32:
```
Caused by: java.security.cert.CertPathValidatorException: Algorithm constraints check failed on keysize limits. RSA 1024bit key used with certificate: CN=tlsclient. Usage was tls client
at sun.security.util.DisabledAlgorithmConstraints$KeySizeConstraint.permits(DisabledAlgorithmConstraints.java:817)
at sun.security.util.DisabledAlgorithmConstraints$Constraints.permits(DisabledAlgorithmConstraints.java:419)
at sun.security.util.DisabledAlgorithmConstraints.permits(DisabledAlgorithmConstraints.java:167)
at sun.security.provider.certpath.AlgorithmChecker.check(AlgorithmChecker.java:326)
at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:125)
... 23 more
```
Modifications:
Replace hardcoded keys / certs with SelfSignedCertificate
Result:
No test-failures related to small key sizes anymore.
Motivation:
We should stop as soon as we are able to set the key material on the server side, as otherwise we may select key material that "belongs" to a less preferred cipher. Besides this, it is also just useless work.
We also need to propagate the exception when it happens during key material selection on the client side, so OpenSSL will produce the right alert.
Modifications:
- Stop once we were able to select a key material on the server side
- Ensure we do not call choose*Alias more often than needed
- Propagate exceptions during selection of the key material on the client side.
Result:
Less overhead and more correct behaviour
Motivation:
We need to let OpenSSL know that we failed to find the key material so it will produce an alert for the remote peer to consume. Besides this, we also need to ensure we wrap(...) until we have produced everything, as otherwise the remote peer may see partial data when an alert is produced in multiple steps.
Modifications:
- Correctly throw if we could not find the key material
- Wrap until we have produced everything
- Add test
Result:
Correctly handle the case when key material could not be found
Motivation:
Calling chooseServerAlias(...) may be expensive, so we should ensure we do not call it multiple times for the same auth methods.
Modifications:
Remove duplicates from authMethods before trying to call chooseServerAlias(...); see the sketch below
Result:
Less performance overhead during key material selection
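A minimal sketch of the de-duplication step (hypothetical helper, not the exact Netty code): a LinkedHashSet drops repeated auth methods while preserving their order, so chooseServerAlias(...) runs at most once per distinct method.
```
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical helper: returns the distinct auth methods in their original order.
final class AuthMethods {
    static String[] dedupe(String... authMethods) {
        Set<String> unique = new LinkedHashSet<String>();
        for (String method : authMethods) {
            unique.add(method);
        }
        return unique.toArray(new String[0]);
    }

    private AuthMethods() { }
}
```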
Motivation:
Following Javadoc standard
Modification:
Change from `@param KeyManager` to `@param keyManager`
Result:
The `@param` matches the actual parameter variable name
Motivation:
We should use an initial buffer size which is >= 1500 (a common MTU setting) to reduce the need for memory copies when a new connection is established. This is especially interesting when SSL / TLS comes into the mix.
This was ported from SwiftNIO:
https://github.com/apple/swift-nio/pull/1641
Modifications:
Increase the initial size from 1024 to 2048.
Result:
Possibly fewer memory copies on new connections (see the usage sketch below)
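A usage sketch, assuming the change concerns the default initial size of the adaptive receive-buffer allocator: users who need a specific initial size can still configure it explicitly rather than relying on the new default.
```
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.AdaptiveRecvByteBufAllocator;
import io.netty.channel.ChannelOption;

final class ReceiveBufferConfig {
    // Explicitly pick min/initial/max receive-buffer sizes for accepted child channels.
    static void configure(ServerBootstrap bootstrap) {
        bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
                new AdaptiveRecvByteBufAllocator(64, 2048, 65536));
    }

    private ReceiveBufferConfig() { }
}
```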
Motivation:
The reason code for MQTT SUBACK messages is truncated, thus not allowing users to get new codes supported by MQTT v5.
Modification:
Changed `MqttCodecTest.testUnsubAckMessageForMqtt5` to catch this, then removed the truncation in `MqttDecoder.decodeSubackPayload` to make it pass.
Result:
Codec users can get all reason codes supported by MQTT5 now.
Motivation:
Creating exceptions is expensive so we should only do so if really needed.
Modifications:
Only create the ConnectTimeoutException if we really need it.
Result:
Less overhead
Motivation:
Consider a scenario where the client initiates a WebSocket handshake but, before the handshake is complete,
the channel is closed for some reason. In such a scenario, the handshake timeout scheduled on the executor
is not cleared, because in such cases the handshakePromise is never completed.
Modifications:
This change completes the handshakePromise exceptionally in the channelInactive callback, if it has not been
completed so far. This triggers the callback on completion of the promise, which clears the timeout scheduled
on the executor (see the sketch below).
This PR also adds a test case which reproduces the scenario described above. The test case fails before the
fix is added and succeeds when the fix is applied.
Result:
After this change, the timeout scheduled on the executor will be cleared, thus freeing up thread resources.
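A minimal sketch of the idea, using a hypothetical handler and promise wiring (not the actual WebSocket client handshaker code): failing the handshake promise on channelInactive fires its listeners, which is where the scheduled timeout gets cancelled.
```
import io.netty.channel.ChannelException;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

// Hypothetical handler: the handshakePromise is assumed to be created elsewhere
// and to have a listener that cancels the scheduled handshake-timeout task.
final class HandshakeTimeoutCleanup extends ChannelInboundHandlerAdapter {
    private final ChannelPromise handshakePromise;

    HandshakeTimeoutCleanup(ChannelPromise handshakePromise) {
        this.handshakePromise = handshakePromise;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        if (!handshakePromise.isDone()) {
            // Fails the promise, which fires its listeners and cancels the timeout.
            handshakePromise.tryFailure(
                    new ChannelException("Channel closed before handshake completed"));
        }
        super.channelInactive(ctx);
    }
}
```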
Motivation:
There was no way to read MQTT properties outside of the `io.netty.handler.codec.mqtt` package.
Modification:
Expose `MqttProperties.MqttProperty` fields `value` and `propertyId` via public methods
Result:
It's possible to read MQTT properties now
Motivation:
If delete-on-exit is enabled and the program runs for a long time, the set of files held by DeleteOnExitHook keeps growing.
This results in a memory leak.
Modification:
Added a custom delete-on-exit hook. If delete-on-exit is enabled, a temporary file is registered with this hook when it is created. After the request ends and the corresponding resources are released, the file is removed from the hook again, so the set no longer grows indefinitely.
Result:
Fixes https://github.com/netty/netty/issues/10351
Bumps ant from 1.9.7 to 1.9.15.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Motivation:
Recent changes for MQTT5 may cause an NPE if an UNSUBACK message is created with `MqttMessageFactory`
and client code isn't up-to-date.
Modifications:
Added a default body in the `MqttUnsubAckMessage` constructor if a null body is passed,
and added null checks in `encodeUnsubAckMessage`.
Result:
`MqttUnsubAckMessage` created with `MqttMessageFactory` doesn't cause an NPE
even if a null body is supplied.
Motivation:
Writing TCP and UDP packets into a Pcap `OutputStream` helps a lot in debugging.
Modification:
Added TCP and UDP Pcap writer.
Result:
New handler can write packets into an `OutputStream`, e.g. a file that can be opened with Wireshark.
Fixes #10385.
Motivation:
It is impossible to specify the retained-message handling policy because the corresponding enum
isn't public (https://github.com/netty/netty/issues/10562)
Modification:
Made `RetainedHandlingPolicy` public
Result:
Fixes #10562.
Motivation:
The builds often fail when downloading dependencies.
This might be caused by the build taking a long time, and cause pooled connections to be closed
by the remote end, if they are idle for too long.
Modification:
Disable connection pooling. This should force Maven to reestablish the connection for each download,
thus reducing the likelihood of the remote end closing connections we wish to use.
Result:
I'll leave it up to the statistics of our CI to confirm, but we should see more stable builds.
Motivation:
writeUtf8 can suffer from inlining issues and/or megamorphic call-sites on the hot path due to the ByteBuf hierarchy
Modifications:
Duplicate and specialize the code paths to reduce the need of polymorphic calls
Result:
Performance is more stable in user code
Motivation:
When we try to parse the kernel version we need to be careful what to
expect. Especially when a custom kernel is used we may get extra chars
in the version numbers. For example I had this one fail because of my
custom kernel that I built for io_uring:
5.8.7ioring-fixes+
Modifications:
- Try to be a bit more lenient when parsing (see the sketch below)
- If we can't parse the kernel version, just use 0.0.0
Result:
Tests are more robust
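A hypothetical sketch of a lenient kernel-version parse (not Netty's actual code): only the leading "major.minor.patch" digits are used, any trailing suffix such as "5.8.7ioring-fixes+" is ignored, and unparseable input falls back to 0.0.0.
```
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class KernelVersion {
    private static final Pattern VERSION = Pattern.compile("^(\\d+)\\.(\\d+)\\.(\\d+)");

    // Returns {major, minor, patch}, or {0, 0, 0} if the string cannot be parsed.
    static int[] parse(String release) {
        Matcher m = VERSION.matcher(release);
        if (!m.find()) {
            return new int[] { 0, 0, 0 };
        }
        return new int[] {
                Integer.parseInt(m.group(1)),
                Integer.parseInt(m.group(2)),
                Integer.parseInt(m.group(3))
        };
    }

    private KernelVersion() { }
}
```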
Motivation:
4b7dba1 introduced a change which was not 100% complete and so
introduced a regression when a user specified
InetProtocolFamily.IPv4 and tried to bind to a port (without specifying
the IP).
Modifications:
- Fix the regression by respecting the InetProtocolFamily
- Add unit test
Result:
The regression when binding to a port without specifying the IP is fixed.
Reduce garbage on MQTT encoding
Motivation:
MQTT encoding and decoding is doing unnecessary object allocation in a number of places:
- MqttEncoder creates many byte[] to encode Strings into UTF-8 bytes
- MqttProperties uses Integer keys instead of int
- Some enums' valueOf implementations create unnecessary arrays on the hot paths
- MqttDecoder was using unnecessary Result<T>
Modification:
- ByteBufUtil::utf8Bytes and ByteBufUtil::reserveAndWriteUtf8 allow performing the same operation GC-free
- MqttProperties uses a primitive key map
- Implemented GC-free constant-table lookup/switch valueOf
- Use bit tricks to pack 2 ints into a single primitive long to store both the result and numberOfBytesConsumed, and use byte[].length to compute numberOfBytesConsumed on the fly. These changes avoid creating Result<T> (see the sketch below).
Result:
Significantly less garbage produced in MQTT encoding/decoding
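A minimal sketch of the bit-packing trick with hypothetical names: two ints travel in one primitive long, so no Result<T> object needs to be allocated per decoded element.
```
// High 32 bits hold the decoded value, low 32 bits the number of bytes consumed.
final class PackedResult {
    static long pack(int value, int numberOfBytesConsumed) {
        return ((long) value << 32) | (numberOfBytesConsumed & 0xFFFFFFFFL);
    }

    static int value(long packed) {
        return (int) (packed >>> 32);
    }

    static int numberOfBytesConsumed(long packed) {
        return (int) packed;
    }

    private PackedResult() { }
}
```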
Motivation:
If ByteBufUtil.getBytes() is called with copy=false, it does not
correctly check that the underlying array can be shared in some cases.
In particular:
* It does not check that the arrayOffset() is zero. This causes it to
incorrectly return the underlying array if the other conditions are
met. The returned array will be longer than requested, with additional
unwanted bytes at its start.
* It assumes that the capacity() of the ByteBuf is equal to the backing
array length. This is not true for some types of ByteBuf, such as
PooledHeapByteBuf. This causes it to incorrectly return the underlying
array if the other conditions are met. The returned array will be
longer than requested, with additional unwanted bytes at its end.
Modifications:
This commit fixes the two bugs by:
* Checking that the arrayOffset() is zero before returning the
underlying array.
* Comparing the requested length to the underlying array's length,
rather than the ByteBuf's capacity, before returning the underlying
array.
This commit also adds a series of test cases for ByteBufUtil.getBytes().
Result:
ByteBufUtil.getBytes() now correctly checks whether the underlying array
can be shared or not.
The test cases will ensure the bug is not reintroduced in the future.
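A minimal sketch of the sharing condition (hypothetical helper, not the exact ByteBufUtil code): the backing array may only be handed out as-is when it lines up exactly with the requested region, i.e. no array offset, no leading bytes and no trailing bytes.
```
import io.netty.buffer.ByteBuf;

final class SharedArrayCheck {
    static boolean canShareBackingArray(ByteBuf buf, int start, int length) {
        return buf.hasArray()                    // heap buffer with an accessible array
                && buf.arrayOffset() == 0        // no unwanted bytes at the start
                && start == 0
                && buf.array().length == length; // no unwanted bytes at the end
    }

    private SharedArrayCheck() { }
}
```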
Motivation:
This reverts commit 825916c7f0 as it turns out it introduced a big performance regression.
Modifications:
Revert 825916c7f0
Result:
Performance of TLS is back to normal
Motivation:
Be able to modify the Http2Settings of Http2ConnectionHandlerBuilder.
Modifications:
Make initialSettings() of Http2ConnectionHandlerBuilder public.
Result:
The public initialSettings() of Http2ConnectionHandlerBuilder allows Http2Settings modification.
Motivation:
In some benchmarks, closing the Channel contributes a lot of overhead due to the call of fillInStackTrace(). We should reduce this overhead.
Modifications:
- Create a StacklessClosedChannelException and use it to reduce overhead.
- Only call ChannelOutboundBuffer.failFlushed(...) when there was a flushed message at all.
Result:
Less performance overhead when closing the Channel