Motivation:
Internally UnixResolverDnsServerAddressStreamProvider#parse calls FileInputStream.read(...)
when parsing the etcResolverFiles.
This will cause the error below when BlockHound is enabled
```
reactor.blockhound.BlockingOperationError: Blocking call! java.io.FileInputStream#readBytes
	at java.io.FileInputStream.readBytes(FileInputStream.java)
	at java.io.FileInputStream.read(FileInputStream.java:255)
```
Modifications:
- Add whitelist entry to BlockHound configuration
- Add test
Result:
Fixes #10925
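For context, a whitelist entry of this shape is what the BlockHound configuration gains; a minimal sketch, using the class/method pair from the stack trace above (the real integration may register it differently):
```java
import reactor.blockhound.BlockHound;

public final class BlockHoundWhitelistExample {
    public static void main(String[] args) {
        BlockHound.builder()
                // Permit the known-blocking file read performed while parsing
                // the etc resolver files; class and method are taken from the
                // stack trace above.
                .allowBlockingCallsInside(
                        "io.netty.resolver.dns.UnixResolverDnsServerAddressStreamProvider",
                        "parse")
                .install();
    }
}
```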
Motivation:
We need to ensure we also build the modules we depend on as otherwise we may not be able to resolve the dependencies correctly
Modifications:
- Add -am when calling maven
Result:
Deployment works all the time
Motivation:
We need to ensure we only register the methods for unix-native-common once as otherwise it may have strange side-effects.
Modifications:
- Add extra method that should be called to signal that we need to register the methods. The registration will only happen once.
- Adjust code to make use of it.
Result:
No more problems due to incorrect registration of these methods.
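A minimal sketch of the once-only guard, with hypothetical names (the real registration hook lives in the native loader):
```java
import java.util.concurrent.atomic.AtomicBoolean;

final class NativeRegistrationGuard {
    private static final AtomicBoolean REGISTERED = new AtomicBoolean();

    private NativeRegistrationGuard() { }

    // Every user of unix-native-common calls this to signal that the
    // native methods are needed; the registration itself happens at most
    // once, regardless of how many callers there are.
    static void ensureRegistered() {
        if (REGISTERED.compareAndSet(false, true)) {
            registerNativeMethods(); // hypothetical JNI registration hook
        }
    }

    private static void registerNativeMethods() {
        // ... perform the actual JNI RegisterNatives call ...
    }
}
```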
Motivation:
As shown in the past, we need to verify that we can actually load the native code, as otherwise we may introduce regressions.
Modifications:
- Add new maven module which tests loading of native modules
- Add job that will also test loading on aarch64
Result:
Less likely to introduce regressions related to loading native code in the future
Motivation:
This reverts commit 7fb62a93b8 as it broke native loading in some cases due to maven dependencies.
Modification:
Revert the commit.
Result:
Native loading works again
Motivation:
We need to ensure we always drain the error stack when a callback throws as otherwise we may pick up the error on a different SSL instance which uses the same thread.
Modifications:
- Correctly drain the error stack if a native method throws
- Add a unit test which failed before the change
Result:
Always drain the error stack
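The pattern follows OpenSSL's usual rule: the thread-local error queue must be cleared on every exit path, including the exceptional one. A sketch under that assumption (both helper methods are stand-ins, not the real tcnative API):
```java
final class ErrorStackDraining {
    // Stand-in for the native binding that drains OpenSSL's thread-local
    // error queue (ERR_clear_error on the C side).
    private static void clearError() { /* native call elided */ }

    // Stand-in for a native method whose callback may throw.
    private static long invokeCallback() { return 0; }

    static long callDrainingErrors() {
        try {
            return invokeCallback();
        } finally {
            // Runs on the normal and the exceptional path alike, so a stale
            // error can never leak into another SSL instance on this thread.
            clearError();
        }
    }
}
```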
Motivation:
New versions of `Bouncy Castle` libraries are out and we should upgrade to them.
Modification:
Upgraded all `Bouncy Castle` libraries to the latest version.
Result:
We now use the latest versions of the `Bouncy Castle` libraries.
Fixes #10905.
Motivation:
`mvn package` on Windows fails if there are some `dependency-reduced-pom.xml` files generated by the previous build.
`mvn clean package` does not help because it does not remove those files at the clean phase.
Modifications:
Use the correct file pattern for the suppression filters.
Result:
Subsequent `mvn package` runs succeed on Windows.
Motivation:
Right now, we don't handle Push Promise reads in `Http2FrameCodec`. Push Promise is one of the key features of HTTP/2, and we should support it in our `Http2FrameCodec`.
Modification:
Added `Http2PushPromiseFrame` and a default implementation to handle Push Promise frames.
Result:
Fixes #10748
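On the receiving side, a handler behind `Http2FrameCodec` can now react to the new frame type. A usage sketch (the handler and its behaviour are hypothetical):
```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http2.Http2PushPromiseFrame;

public class PushPromiseAwareHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof Http2PushPromiseFrame) {
            Http2PushPromiseFrame promise = (Http2PushPromiseFrame) msg;
            // Inspect the promised request here, or reset the pushed stream
            // if the push is unwanted.
            System.out.println("Received push promise: " + promise);
        } else {
            super.channelRead(ctx, msg);
        }
    }
}
```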
Motivation:
We can add a status badge and should also clarify the requirements.
Modifications:
- Add status badge
- Clarify requirements
Result:
Cleanup docs
Motivation:
Let's use only one build config when building the 4.1 branch.
Modifications:
As we already do full validation during PR builds, we can use just one build config for pushes to the "main" branches
Result:
Faster build times
Motivation:
We need to take special care when deploying snapshots as we need to generate the jars in multiple steps
Modifications:
- Use the nexus staging plugin to stage jars locally in multiple steps
- Add extra job that will merge these staged jars and deploy them
Result:
Fixes https://github.com/netty/netty/issues/10887
Motivation:
A race detector discovered a data race in GlobalEventExecutor present in
netty 4.1.51.Final:
```
Write of size 4 at 0x0000cea08774 by thread T103:
#0 io.netty.util.internal.DefaultPriorityQueue.poll()Lio/netty/util/internal/PriorityQueueNode; DefaultPriorityQueue.java:113
#1 io.netty.util.internal.DefaultPriorityQueue.poll()Ljava/lang/Object; DefaultPriorityQueue.java:31
#2 java.util.AbstractQueue.remove()Ljava/lang/Object; AbstractQueue.java:113
#3 io.netty.util.concurrent.AbstractScheduledEventExecutor.pollScheduledTask(J)Ljava/lang/Runnable; AbstractScheduledEventExecutor.java:133
#4 io.netty.util.concurrent.GlobalEventExecutor.fetchFromScheduledTaskQueue()V GlobalEventExecutor.java:119
#5 io.netty.util.concurrent.GlobalEventExecutor.takeTask()Ljava/lang/Runnable; GlobalEventExecutor.java:106
#6 io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run()V GlobalEventExecutor.java:240
#7 io.netty.util.internal.ThreadExecutorMap$2.run()V ThreadExecutorMap.java:74
#8 io.netty.util.concurrent.FastThreadLocalRunnable.run()V FastThreadLocalRunnable.java:30
#9 java.lang.Thread.run()V Thread.java:835
#10 (Generated Stub) <null>
Previous read of size 4 at 0x0000cea08774 by thread T110:
#0 io.netty.util.internal.DefaultPriorityQueue.size()I DefaultPriorityQueue.java:46
#1 io.netty.util.concurrent.GlobalEventExecutor$TaskRunner.run()V GlobalEventExecutor.java:263
#2 io.netty.util.internal.ThreadExecutorMap$2.run()V ThreadExecutorMap.java:74
#3 io.netty.util.concurrent.FastThreadLocalRunnable.run()V FastThreadLocalRunnable.java:30
#4 java.lang.Thread.run()V Thread.java:835
#5 (Generated Stub) <null>
```
The race is legit, but benign. Triggering it requires a TaskRunner to
begin exiting and set 'started' to false; more work to be scheduled,
which starts a new TaskRunner; that work to schedule additional work,
which modifies 'scheduledTaskQueue'; and then the original TaskRunner
to check 'scheduledTaskQueue'. But there is no danger to this race, as
it can only produce a false negative in the condition that causes the
code to CAS 'started', and the CAS itself is thread-safe.
Modifications:
Delete problematic references to scheduledTaskQueue. The only way
scheduledTaskQueue could be modified since the last check is if another
TaskRunner is running, in which case the current TaskRunner doesn't
care.
Result:
Data-race free code, and a bit less code to boot.
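Reduced to a sketch, the exit path now consults only thread-safe state; the CAS on 'started' is what guarantees correctness (a simplification, not the actual GlobalEventExecutor code):
```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

final class TaskRunnerSketch implements Runnable {
    final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<Runnable>();
    final AtomicBoolean started = new AtomicBoolean(true);

    @Override
    public void run() {
        for (;;) {
            Runnable task = taskQueue.poll();
            if (task != null) {
                task.run();
                continue;
            }
            started.set(false);
            // Check only the thread-safe task queue before exiting. If a task
            // raced in, try to reclaim the runner role via CAS; losing the CAS
            // means another runner took over, so exiting is safe either way.
            if (taskQueue.isEmpty() || !started.compareAndSet(false, true)) {
                break;
            }
        }
    }
}
```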
Motivation:
When validating PRs we should also at least run one job that uses boringssl
Modifications:
- Add job that uses boringssl
- Cleanup docker compose files
- Fix buffer leak in test
Result:
Also run with boringssl when PRs are validated
Motivation:
To prevent failures due to problems while downloading dependencies, we should cache these
Modifications:
Add maven cache
Result:
No more failures due to problems while downloading dependencies
Motivation:
We need to find a way to deploy SNAPSHOTS for different architectures with the same timestamp. Otherwise it will cause problems.
See https://github.com/netty/netty/issues/10887
Modification:
Skip all deploys other than x86_64
Result:
Users are able to use SNAPSHOTS for x86_64
Motivation:
The CONNACK message builder `ConnAckBuilder` doesn't provide a smooth way to assign the message properties. This PR tries to provide a simpler way to create them, in a lazy fashion.
Modification:
This PR permits storing properties in the ConnAck message, collecting them and inserting them during the build phase. The syntax this PR introduces is:
```java
MqttMessageBuilders.connAck().properties(new MqttMessageBuilders.PropertiesInitializer<MqttMessageBuilders.ConnAckPropertiesBuilder>() {
@Override
public void apply(MqttMessageBuilders.ConnAckPropertiesBuilder builder) {
builder.assignedClientId("client1234");
builder.userProperty("custom_property", "value");
}
}).build()
```
The names of the properties are defined in the `ConnAckPropertiesBuilder` so that they can easily be picked up by autocompletion tools.
Result:
This PR adds the builder class `ConnAckPropertiesBuilder`, which is used by the newly introduced `properties` method inside the message builder class `ConnAckBuilder`.
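Since `PropertiesInitializer` declares a single method, the same call collapses to a lambda on Java 8+ (usage sketch):
```java
MqttMessageBuilders.connAck()
        .properties(builder -> builder.userProperty("custom_property", "value"))
        .build();
```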
Motivation:
We should also deploy snapshots for our cross compiled native jars.
Modifications:
- Add job and docker files for deploying cross compiled native jars
- Ensure we map the maven cache into our docker containers
Result:
Deploy aarch64 jars and re-use cache
Motivation:
Android seems to use a different field name, so we should also try to access the field with the name used by Android.
Modifications:
Try `fd` first and, if this fails, fall back to `descriptor` as the field name
Result:
Workaround for Android.
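The fallback, sketched (the field names come from the commit; the surrounding lookup helper is hypothetical):
```java
import java.lang.reflect.Field;

final class FileDescriptorFieldLookup {
    // Try the OpenJDK field name first, then fall back to the name
    // Android uses for the same field.
    static Field fdField(Class<?> fileDescriptorClass) throws NoSuchFieldException {
        try {
            return fileDescriptorClass.getDeclaredField("fd");
        } catch (NoSuchFieldException ignore) {
            return fileDescriptorClass.getDeclaredField("descriptor");
        }
    }
}
```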
Motivation:
A switch statement pays off when there are many cases, as it can be faster than an if-else chain. However, here we use a switch with only one case, which brings no benefit and can even hurt performance.
Modification:
Changed switch to if.
Result:
Simpler code without a needless single-case switch.
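The change in miniature (illustrative names, not the actual code):
```java
final class SwitchToIfExample {
    enum State { READY, BUSY }

    // Before: a switch carrying a single case.
    static void before(State state) {
        switch (state) {
            case READY:
                process();
                break;
            default:
                break;
        }
    }

    // After: a plain if expresses the same check more directly.
    static void after(State state) {
        if (state == State.READY) {
            process();
        }
    }

    private static void process() { }
}
```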
Motivation:
We should just use GitHub Actions for the CI
Modifications:
- Adjust docker / docker compose files
- Add different workflows and jobs to deploy and build the project
Result:
Don't depend on external CI services
Motivation:
The HPACK static table is organized so that fields with the same
name are sequential. This means that a sequential scan can
short-circuit on a name mismatch.
Modifications:
* `HpackStaticTable.getIndexInsensitive` returns -1 on name mismatch
rather than keep scanning.
* `HpackStaticTable` now statically defines the max position in the array
where name duplication is possible (after the given index there's
no need to check for other fields with the same name); see the sketch below.
* Benchmark for different lookup patterns
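A simplified illustration of the short-circuit scan (not the actual `HpackStaticTable` code):
```java
final class ShortCircuitScanSketch {
    // Entries sharing a name are adjacent in the static table, so once the
    // name has matched, the first subsequent name mismatch ends the scan.
    static int indexOf(String[][] table, String name, String value) {
        boolean nameMatched = false;
        for (int i = 0; i < table.length; i++) {
            boolean matches = table[i][0].equals(name);
            if (matches && table[i][1].equals(value)) {
                return i;
            }
            if (nameMatched && !matches) {
                return -1; // left the run of same-named entries: stop scanning
            }
            nameMatched |= matches;
        }
        return -1;
    }
}
```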
Result:
Better HPACK static table lookup performance.
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
netty-jni-util is now also hosted on maven central. Let's use it
Modifications:
Adjust plugins to just unpack netty-jni-util and use it
Result:
Be able to use what is in the maven cache for netty-jni-util
Motivation:
If the MQTT client specifies Subscribe Options parameters only available in MQTT v5 and tries to encode the message as MQTT v3, an invalid QoS value is encoded.
Modification:
Check the MQTT version when encoding SUBSCRIBE message options; if it's 3.1 or 3.1.1, only encode the QoS and skip the other options.
Result:
MqttEncoder produces a valid SUBSCRIBE message even if the client has specified options not available in the current MQTT version.
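The version check, sketched (the option-bit layout follows the MQTT 5 spec; the method shape is hypothetical, not the actual `MqttEncoder` code):
```java
import io.netty.handler.codec.mqtt.MqttVersion;

final class SubscriptionOptionSketch {
    // Encode the per-topic subscription option byte. MQTT 3.1/3.1.1 define
    // only the QoS bits; No Local, Retain As Published and Retain Handling
    // exist in MQTT 5 only.
    static int encodeOptions(MqttVersion version, int qos, boolean noLocal,
                             boolean retainAsPublished, int retainHandling) {
        if (version == MqttVersion.MQTT_3_1 || version == MqttVersion.MQTT_3_1_1) {
            return qos; // older protocol levels: QoS only
        }
        int flags = qos;
        if (noLocal) {
            flags |= 0x04;
        }
        if (retainAsPublished) {
            flags |= 0x08;
        }
        flags |= retainHandling << 4;
        return flags;
    }
}
```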
Motivation:
We need to ensure we only register native methods once as otherwise we may end up in an "invalid" state. The problem here was that before, it was basically the responsibility of the user of transport-native-unix-common to register the methods. This is error-prone, as there may be multiple users of these on the classpath at the same time.
Modifications:
- Provide a way to init the native lib without registering the native methods of the provided classes. This is needed to be able to re-use functionality which is exposed to our internal native code
- Use flatten plugin to correctly resolve classifier and so have the correct dependency
- Call Unix.* method to ensure we register the methods correctly once
- Include native lib as well in the native jars of unix-common
Result:
Be able to have multiple artifacts on the classpath that depend on unix-common. Related to https://github.com/netty/netty-incubator-transport-io_uring/issues/15
Motivation:
I did not see any tangible advantage to the padding.
The only other field that was guarded was a rarely changed object reference to a BitSet.
Without the padding, there is also no longer any use of the inheritance hierarchy.
The padding was also using `long`, which would not necessarily prevent the JVM from fitting the aforementioned object reference in an alignment gap.
Modification:
Move all the fields into the InternalThreadLocalMap and deprecate the stuff we'd like to remove.
Result:
Simpler code.
This resolves the discussion in https://github.com/netty/netty/issues/9284
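For reference, the dropped idiom looked roughly like this (an illustration, not the deleted code):
```java
// Padding superclass idiom: long fields meant to keep the guarded field on
// its own cache line. As noted above, the JVM may still place a small
// reference into an alignment gap, undercutting the intent.
class LeftPadding {
    long p0, p1, p2, p3, p4, p5, p6, p7;
}

class PaddedHolder extends LeftPadding {
    Object rarelyChangedBitSet; // the one field the padding was guarding
}
```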
Motivation:
https://github.com/netty/netty/pull/10814 did fix a bug where we tried to call memoryAddress() even though this is not supported. Unfortunately this fix was only applied to one method, so we missed another method which could still throw an exception when memoryAddress() was called.
Modifications:
- Also fix the memoryAddress(offset) method.
- Adjust the unit test to also cover this.
Result:
Fixes https://github.com/netty/netty/issues/10813 completely.
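The guard in question, sketched against the public `ByteBuf` API:
```java
import io.netty.buffer.ByteBuf;

final class MemoryAddressGuard {
    // Only dereference a native address when the buffer actually has one;
    // calling memoryAddress() on other buffers throws.
    static long addressOrZero(ByteBuf buf, int offset) {
        if (buf.hasMemoryAddress()) {
            return buf.memoryAddress() + offset;
        }
        return 0; // hypothetical fallback for this sketch
    }
}
```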
Motivation:
We shouldn't catch Throwable in InternalLoggerFactory, as this may hide OOME etc.
Modifications:
Only catch LinkageError and Exception
Result:
Fixes https://github.com/netty/netty/issues/10857
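In outline, the narrowed catch lets errors such as OutOfMemoryError propagate while the expected failure modes are still handled (the probed class is just an example):
```java
final class LoggerFactoryProbe {
    static Class<?> tryLoad() {
        try {
            return Class.forName("org.slf4j.LoggerFactory"); // example probe
        } catch (LinkageError | Exception e) {
            // Missing or incompatible logging backend: fall back gracefully.
            // OutOfMemoryError and other Errors are no longer swallowed here.
            return null;
        }
    }
}
```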
Motivation:
When using the JDK's SSLEngineImpl with TLSv1.3, it sometimes returns HandshakeStatus.FINISHED multiple times. This can lead to SslHandshakeCompletionEvents being fired multiple times.
Modifications:
- Keep track of whether we notified before and, if so, don't notify again when TLSv1.3 is in use
- Add unit test
Result:
Consistent usage of events
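The guard, reduced to a sketch with hypothetical names:
```java
final class HandshakeCompletionGuard {
    private boolean handshakeNotified;

    // With TLSv1.3 the JDK engine may report FINISHED more than once;
    // only the first occurrence should fire a completion event.
    void onHandshakeFinished(boolean tlsv13InUse) {
        if (tlsv13InUse && handshakeNotified) {
            return; // already fired for this handshake
        }
        handshakeNotified = true;
        fireHandshakeCompletionEvent(); // hypothetical notification hook
    }

    private void fireHandshakeCompletionEvent() { /* ... */ }
}
```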