Motivation:
There is an intrinsic race between the local session resetting a stream
and the peer no longer sending frames for it. This can result in the
session receiving frames for a stream that the local peer no longer
tracks. That causes a StreamException to be thrown, which triggers a
RST_STREAM frame (a good thing), but it is also logged at level WARN,
which is noisy for an expected and benign condition.
Modification:
Change the log level to DEBUG when logging stream errors with code
STREAM_CLOSED. All others are more interesting and will continue to be
logged at level WARN.
Additionally, it was found that DATA frames for streams that could not
have existed only resulted in a StreamException, while the spec is clear
that such a situation must be fatal to the connection, i.e. a
GOAWAY(PROTOCOL_ERROR).
Fixes #8025.
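As a rough illustration of the log-level change (a minimal sketch, not the actual Http2ConnectionHandler code; SLF4J is used here only for brevity):
```
import io.netty.handler.codec.http2.Http2Error;
import io.netty.handler.codec.http2.Http2Exception.StreamException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class StreamErrorLogging {
    private static final Logger logger = LoggerFactory.getLogger(StreamErrorLogging.class);

    static void logStreamError(StreamException cause) {
        if (cause.error() == Http2Error.STREAM_CLOSED) {
            // Expected race between a local reset and in-flight frames from the peer: keep quiet.
            logger.debug("Stream error on stream {}", cause.streamId(), cause);
        } else {
            // Anything else is more interesting and stays at WARN.
            logger.warn("Stream error on stream {}", cause.streamId(), cause);
        }
    }

    private StreamErrorLogging() { }
}
```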
Motivation:
394a1b3485 introduced a hard dependency on GLIBC 2.12 which was not the case before. This had the effect of making the native epoll transports unusable on platforms that ship with earlier versions of GLIBC.
To keep things as backward compatible as possible, we should not introduce such changes in a bugfix release.
Special thanks to @weissi for all the help fixing this.
Modifications:
- Use syscalls directly to remove the dependency on GLIBC 2.12
- Make the code that needs newer GLIBC versions consistent
- Adjust the scattering read test to only run if the recvmmsg syscall is supported
- Clean up pom.xml, as some parts are no longer needed now that syscalls are used
Result:
Fixes https://github.com/netty/netty/issues/9758.
Motivation
AbstractChannelHandlerContext uses recyclable tasks when performing
writes from outside of the event loop. There are currently two distinct
classes, WriteTask and WriteAndFlushTask, used for executing writes
versus writeAndFlushes, and these are recycled in separate pools.
However, it is straightforward to have a single class / recycler pool
with a flush flag.
Modifications
- Unify WriteTasks into a single class using the sign bit of the
existing size field to indicate whether a flush should be performed
- Use the new executor lazyExecute() method to lazily execute the
non-flush write tasks explicitly
- Change AbstractChannelHandlerContext#invokeWrite and
AbstractChannelHandlerContext#invokeWriteAndFlush from private to
package-private to avoid synthetic methods
- Correct the default object size estimate for WriteTask
Results
- Possibly improved reuse of recycled write tasks
- Fewer virtual method calls and shorter path lengths
- Less code
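A minimal sketch of the sign-bit idea (field and method names are illustrative, not Netty's actual WriteTask):
```
final class UnifiedWriteTask {
    private int size; // sign bit doubles as the "write and flush" flag

    void init(int estimatedSize, boolean flush) {
        // The estimate is non-negative, so the sign bit is free to carry the flush flag.
        size = flush ? estimatedSize | Integer.MIN_VALUE : estimatedSize;
    }

    boolean isFlush() {
        return (size & Integer.MIN_VALUE) != 0;
    }

    int estimatedSize() {
        return size & Integer.MAX_VALUE; // strip the flag to recover the estimate
    }
}
```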
Motivation
JMH 1.22 was released recently, we might as well use the latest when
running benchmarks.
Summary of changes:
https://mail.openjdk.java.net/pipermail/jmh-dev/2019-November/002879.html
Modifications
Update jmh dependencies in microbench module from version 1.21 to 1.22.
Result
Benchmarks run using latest JMH
Motivation:
By default CloseWebSocketFrames are handled automatically.
However, I need to manually manage their sending on both the client and server sides.
Modification:
Automatically send a close frame on channel close when one has not already been sent explicitly.
Result:
No more messages like "Connection closed by remote peer" for normal close flows.
Motivation:
The latest recommended Maven version is 3.6.2, so we should use it.
Modifications:
Update from 3.5.2 to 3.6.2
Result:
Use latest recommended maven version
Motivation:
In most cases, we want to use a MultithreadEventLoopGroup such as NioEventLoopGroup without setting the number of threads, only the thread name. Today that requires code like the following:
NioEventLoopGroup boss = new NioEventLoopGroup(0, new DefaultThreadFactory("boss"));
Passing the number 0 looks confusing when all we want to set is the thread name, so it is better to add a new constructor for this case.
Modifications:
Add a new constructor to all event loop groups, for example: public NioEventLoopGroup(ThreadFactory threadFactory)
Result:
Users can set the thread factory without having to pass 0 for the number of threads:
NioEventLoopGroup boss = new NioEventLoopGroup(new DefaultThreadFactory("boss"));
Motivation:
Netty currently doesn't build and distribute the test JARs. Having easy access to the test JARs would enable downstream projects (such as GraalVM) to integrate the Netty unit tests in their CI pipeline to ensure continuous compatibility with Netty features. The alternative would be to build Netty from source every time to obtain the test JARs; however, depending on the CI setup, that may not always be possible.
Modifications:
Modify `pom.xml` to enable generation of test JARs and corresponding source JARs.
Result:
Running the Maven build will create the test JARs and corresponding source JARs. This change was tested locally via `mvn install` and the test JARs are correctly copied under the Maven cache. The expectation is that running `mvn deploy` will also copy the additional JARs to the maven repository.
Motivation:
MINIMAL_WAIT is a key constant, but when reading it one has to dig through the code logic to see whether it is in milliseconds or nanoseconds. Improving the javadoc will make this clear.
Modifications:
Improve the javadoc by adding "10ms", in the same style as DEFAULT_CHECK_INTERVAL, which is documented as "1s".
Result:
It is now easy to see that the value is in milliseconds, and the javadoc style matches other constants such as DEFAULT_CHECK_INTERVAL.
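For illustration, a sketch of the javadoc style described above (class name and exact wording are illustrative, mirroring the traffic shaping constants):
```
public final class TrafficShapingDefaults {
    /**
     * Default check interval: 1s.
     */
    public static final long DEFAULT_CHECK_INTERVAL = 1000;

    /**
     * Default minimal time to wait between two traffic shaping checks: 10ms.
     */
    public static final long MINIMAL_WAIT = 10;

    private TrafficShapingDefaults() { }
}
```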
Motivation:
Unify parameter validation's code style.
Modifications:
Change the parameter validation statements to use ObjectUtil.checkNotNull.
Result:
Parameter validation code now follows the same style as the rest of the code base.
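A before/after sketch of the style change (the constructor and field here are illustrative):
```
import static io.netty.util.internal.ObjectUtil.checkNotNull;

final class Example {
    private final String name;

    Example(String name) {
        // Replaces the hand-written check:
        // if (name == null) { throw new NullPointerException("name"); }
        this.name = checkNotNull(name, "name");
    }
}
```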
Motivation:
Currently when use of Unsafe is disabled and an internal reallocation is
performed for a direct PooledByteBuf, a one-off temporary duplicate is
made of the source and destination backing nio buffers so that the
copy can be done in a threadsafe manner.
The need for this can be reduced by sharing the temporary duplicate
buffer that is already stored in the corresponding destination
PooledByteBuf instance.
Modifications:
Have PoolArena#memoryCopy(...) take the destination PooledByteBuf
instead of the underlying mem reference and offset, and use
internalNioBuffer() to obtain/initialize a reusable duplicate of the
backing nio buffer.
Result:
Fewer temporary allocations when resizing direct pooled ByteBufs in the
non-Unsafe case
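A sketch of the underlying idea (not PoolArena's actual code): copy into the destination buffer's reusable internal NIO view instead of allocating a one-off duplicate per reallocation.
```
import io.netty.buffer.ByteBuf;
import java.nio.ByteBuffer;

final class CopyHelper {
    // Assumes src.remaining() == length and that the caller owns dst,
    // which is what makes reusing the cached internal duplicate safe.
    static void copyInto(ByteBuffer src, ByteBuf dst, int dstIndex, int length) {
        // internalNioBuffer() hands back a duplicate cached inside dst, so no temporary
        // ByteBuffer is created for the copy.
        ByteBuffer dstNio = dst.internalNioBuffer(dstIndex, length);
        dstNio.put(src);
    }

    private CopyHelper() { }
}
```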
Motivation:
Currently, the only way to create the fixed-header-only messages PINGREQ,
PINGRESP and DISCONNECT is to explicitly instantiate an `MqttFixedHeader`, like:
```
MqttFixedHeader disconnectFixedHeader = new MqttFixedHeader(MqttMessageType.DISCONNECT,
false, MqttQoS.AT_MOST_ONCE, false, 0);
MqttMessage disconnectMessage = new MqttMessage(disconnectFixedHeader);
```
According to the MQTT spec
(http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718077),
the fixed-header flags for these messages are reserved and must be set to zero, otherwise
the receiver must close the connection. It's easy to get this wrong when
creating the header explicitly, for example by setting the QoS to
`AT_LEAST_ONCE`.
As such, provide static constants for PINGREQ, PINGRESP and
DISCONNECT messages that will set the flags correctly for the developer.
Modification:
Add static constants to MqttMessage class to construct PINGREQ, PINGRESP and
DISCONNECT messages that will set the fixed-header flags correctly to 0.
Result:
Easier usage.
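A sketch of what such a constant looks like (the holder class and field name here are illustrative, not necessarily the exact ones added to MqttMessage):
```
import io.netty.handler.codec.mqtt.MqttFixedHeader;
import io.netty.handler.codec.mqtt.MqttMessage;
import io.netty.handler.codec.mqtt.MqttMessageType;
import io.netty.handler.codec.mqtt.MqttQoS;

final class MqttReservedMessages {
    // The reserved fixed-header flags are pinned to their mandated values.
    static final MqttMessage DISCONNECT = new MqttMessage(
            new MqttFixedHeader(MqttMessageType.DISCONNECT, false, MqttQoS.AT_MOST_ONCE, false, 0));

    private MqttReservedMessages() { }
}
```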
Motivation:
To avoid regressions regarding connection-specific headers [1], we should add a test.
[1] https://tools.ietf.org/html/rfc7540#section-8.1.2.2
Modification:
Add a test that checks that the following headers are removed:
- Connection
- Host
- Keep-Alive
- Proxy-Connection
- Transfer-Encoding
- Upgrade
Result:
There's no functional change.
Motivation:
sun.security.ssl.X509KeyManagerImpl does not use "stable" aliases, so aliases may change between invocations. This makes caching useless. Because of this, we should disable the cache when this implementation is used.
Modifications:
- Disable caching if sun.security.ssl.X509KeyManagerImpl is used
- Add tests
Result:
More protection against https://github.com/netty/netty/issues/9747.
Motivation:
https://github.com/netty/netty/pull/9548 introduced a change that creates a new AttributeKey
for each SimpleChannelPool instance created. AttributeKeys are cached statically in a ConstantPool
by the AttributeKey.newInstance method. Because of this, creating a SimpleChannelPool instance will
allocate memory that will never be released, even after the SimpleChannelPool is closed.
Modifications:
This change goes back to a single AttributeKey per SimpleChannelPool, just using a more specific
name to reduce the chance of conflicts with user code.
Result:
No memory is leaked after a SimpleChannelPool instance is created and destroyed.
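The shape of the fix, sketched (holder class and key name are illustrative):
```
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.util.AttributeKey;

final class PoolKeyHolder {
    // One constant shared by all pool instances; AttributeKey.newInstance caches the name
    // in a static ConstantPool, so a per-instance key would never be released.
    static final AttributeKey<SimpleChannelPool> POOL_KEY =
            AttributeKey.newInstance("io.netty.example.simpleChannelPool");

    private PoolKeyHolder() { }
}
```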
Motivation:
At the moment the cache is unbounded, which can lead to huge memory consumption. We should ensure it is bounded by default.
Modifications:
Ensure the cache is bounded
Result:
Fixes https://github.com/netty/netty/issues/9747.
Motivation
There's currently no way to determine whether an arbitrary ByteBuf
behaves internally like a "singular" buffer or a composite one, and this
can be important to know when making decisions about how to manipulate
it in an efficient way.
An example of this is the ByteBuf#discardReadBytes() method which
increases the writable bytes for a contiguous buffer (by readerIndex)
but does not for a composite one.
Unfortunately !(buf instanceof CompositeByteBuf) is not reliable, since
for example this will be true in the case of a sliced CompositeByteBuf
or some third-party composite implementation.
isContiguous was chosen over isComposite since we want to assume "not
contiguous" in the unknown/default case - the doc will make it clear that
false does not imply composite.
Modifications
- Add ByteBuf#isContiguous() which returns true by default
- Override the "concrete" ByteBuf impls to return true and ensure
wrapped/derived impls delegate it appropriately
- Include some basic unit tests
Result
Better assumptions/decisions possible when manipulating arbitrary
ByteBufs, for example when combining/cumulating them.
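An illustrative use of the new method (the helper class here is hypothetical):
```
import io.netty.buffer.ByteBuf;

final class CumulationHelper {
    // Only a contiguous buffer regains writable space from discardReadBytes();
    // the default of false is the safe assumption for unknown implementations.
    static boolean canReclaimByDiscarding(ByteBuf buf) {
        return buf.isContiguous() && buf.readerIndex() > 0;
    }

    private CumulationHelper() { }
}
```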
Motivation:
After the fix for #9377, some WebSocket examples work incorrectly.
Modification:
Replace `Unpooled.EMPTY_BUFFER` with `ctx.alloc().buffer(0)` for responses with possible content
Result:
Examples work
Motivation:
Padding was removed from the CONTINUATION frame in the http2-spec, as shown in this [PR](https://github.com/http2/http2-spec/pull/510). We should follow suit.
Modifications:
- Remove padding when writing CONTINUATION frame in DefaultHttp2FrameWriter
- Add a unit test for writing a large header with padding
Result:
More spec-compliant
Motivation:
We can move some methods around to improve encapsulation in Recycler
Modifications:
Move / rename methods to make usage more clear
Result:
Code cleanup
Motivation
Currently when future tasks are scheduled via EventExecutors from a
different thread, at least two allocations are performed - the
ScheduledFutureTask wrapping the to-be-run task, and a Runnable wrapping
the action to add to the scheduled task priority queue. The latter can
be avoided by incorporating this logic into the former.
Modification
- When scheduling or cancelling a future task from outside the event
loop, enqueue the task itself rather than wrapping in a Runnable
- Have ScheduledFutureTask#run first verify the task's deadline has
passed and if not add or remove it from the scheduledTaskQueue depending
on its cancellation state
- Add new outside-event-loop benchmarks to ScheduleFutureTaskBenchmark
Result
Fewer allocations when scheduling/cancelling future tasks
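A much-simplified sketch of the run()-side check described above (names and queue handling are illustrative, not the real ScheduledFutureTask):
```
import java.util.Queue;

final class ScheduledTaskSketch implements Runnable {
    private final Queue<ScheduledTaskSketch> scheduledTaskQueue;
    private final long deadlineNanos;
    private final Runnable action;
    volatile boolean cancelled;

    ScheduledTaskSketch(Queue<ScheduledTaskSketch> scheduledTaskQueue, long deadlineNanos, Runnable action) {
        this.scheduledTaskQueue = scheduledTaskQueue;
        this.deadlineNanos = deadlineNanos;
        this.action = action;
    }

    @Override
    public void run() {
        if (System.nanoTime() < deadlineNanos) {
            // Enqueued directly from another thread as a schedule/cancel signal:
            // not due yet, so (re)insert into or remove from the scheduled queue.
            if (cancelled) {
                scheduledTaskQueue.remove(this);
            } else {
                scheduledTaskQueue.add(this);
            }
        } else if (!cancelled) {
            action.run(); // due: actually execute
        }
    }
}
```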
Motivation
The recycling ratio is currently implemented by comparing with a masked
count. The mask operation is not free and also not necessary.
Modification
Change the count(s) to just iterate over the corresponding interval,
which requires only a comparison and no mask.
Also make "first time recycle" behaviour consistent and revert change to
RecyclerTest made in #9727.
Result
Less recycling overhead
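A sketch of the interval-based counting (fields are illustrative, not Recycler's actual ones):
```
final class RatioDrop {
    private final int interval; // e.g. 8: keep 1 of every 8 offered objects
    private int counter;

    RatioDrop(int interval) {
        this.interval = interval;
    }

    // Returns true if the offered object should be dropped rather than recycled.
    boolean drop() {
        if (++counter < interval) {
            return true;  // still inside the current interval: drop
        }
        counter = 0;      // keep this one and start a new interval
        return false;
    }
}
```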
Motivation:
At the moment we only enforce ratioMask for the Stack, which means we only guard against recycle bursts when objects are recycled from the same Thread. We should also enforce the ratioMask in the WeakOrderQueue so that we also guard against bursts when recycling from other threads.
Modifications:
- Keep counter in WeakOrderQueue to enforce ratioMask as well
- Adjust unit test
Result:
Better guard against recycle bursts which could pollute the heap unnecessarily.
Motivation:
If something is mis-configured, the "main" test will fail but it is unclear
whether it fails because the integration does not work or it wasn't applied
at all.
Also see:
https://github.com/netty/netty/issues/9738#issuecomment-548416693
Modifications:
This change adds a test that uses the same mechanism as BlockHound does
(`ServiceLoader`) and checks that `NettyBlockHoundIntegration` is present.
Result:
It is now clear whether the integration is not working or it wasn't loaded at all.
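A sketch of the ServiceLoader-based check (the assertion wiring in the real test may differ):
```
import java.util.ServiceLoader;
import reactor.blockhound.integration.BlockHoundIntegration;

final class IntegrationPresenceCheck {
    static boolean nettyIntegrationPresent() {
        for (BlockHoundIntegration integration : ServiceLoader.load(BlockHoundIntegration.class)) {
            if ("NettyBlockHoundIntegration".equals(integration.getClass().getSimpleName())) {
                return true; // the integration is discoverable on the class path
            }
        }
        return false;
    }

    private IntegrationPresenceCheck() { }
}
```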
Motivation:
Java 13 requires special flags to be set to make BlockHound work
Modifications:
- Added jdk13 profile to `transport-blockhound-tests`
- Enabled `-XX:+AllowRedefinitionToAddDeleteMethods` on jdk13
Result:
The tests work on Java 13
Motivation
Currently the visibility of the various Recycler inner classes and their
fields isn't optimal. Some private members are accessed by other classes
resulting in synthetic methods, and other non-private classes/members
are only accessed privately and so can be made private.
Modifications
- Increase/reduce visibility of various fields/methods/classes within
Recycler
- Have WeakOrderQueue extend WeakReference<Thread> to eliminate the
owner field
- Change local DefaultHandle var to DefaultHandle<?> to avoid raw type
compiler warning
Result
Tidier code, fewer implicit methods on hot paths (reducing inlining
depths)
Motivation
This is already done internally for various reasons but it would make
sense, in my opinion, as a top-level concept: submitting a task to be run
on the event loop which doesn't need to run immediately but must still be
executed in FIFO order relative to all other submitted tasks (be those
"lazy" or otherwise).
It's nice to separate this abstract "relaxed" semantic from concrete
implementations - the simplest is to just delegate to the existing
execute, but for the main event loop implementations it translates to
whether a wakeup is required after enqueuing.
Having a "global" abstraction also allows for simplification of our
internal use - for example encapsulating more of the common scheduled
future logic within AbstractScheduledEventExecutor.
Modifications
- Introduce public LazyRunnable interface and
AbstractEventExecutor#lazyExecute method (would be nice for this to be
added to EventExecutor interface in netty 5)
- Tweak existing SingleThreadEventExecutor mechanics to support these
- Replace internal use of NonWakeupRunnable (such as for pre-flush
channel writes)
- Uplift scheduling-related hooks into AbstractScheduledEventExecutor,
eliminating intermediate executeScheduledRunnable method
Result
Simpler code, cleaner and more useful/flexible abstractions - cleaner in
that they fully communicate the intent in a more general way, without
implying/exposing/restricting implementation details
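An illustrative use of the new method (assuming the signature is lazyExecute(Runnable); the helper class is hypothetical):
```
import io.netty.util.concurrent.AbstractEventExecutor;

final class LazySubmit {
    static void queueLazily(AbstractEventExecutor executor, Runnable task) {
        // Runs in FIFO order relative to other submitted tasks, but does not force an
        // immediate wakeup of the event loop; it runs once the loop wakes up anyway.
        executor.lazyExecute(task);
    }

    private LazySubmit() { }
}
```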
Motivation:
We currently use a finalizer to ensure we correctly return the reserved space back to the Stack, but this is not really needed as we can ensure we return it before dropping the WeakOrderQueue.
Modifications:
Use explicit method call to ensure we return the reserved space back before dropping the object
Result:
Less finalizer usage and so less work for the GC
Motivation:
We null out the element in the array after we decrement the current size of the Stack, but we do not directly write the updated size back to the stored field. This is problematic, as we do some validation before writing it back and so may never do so if the validation fails. This can later lead to null objects being returned where they are not expected.
Modifications:
Update the size directly after nulling out the object
Result:
No more unexpected null values possible
## Motivation
The InternalLoggerFactory attempts to instantiate different logger
implementations to discover what is available on the class path,
accepting the first implementation that does not throw an exception.
Currently, the default ordering will attempt to instantiate a Log4j1
logger before Log4j2. For environments where both Log4j1 and Log4j2 are
available, this will result in using the older version. It seems that it
would be more intuitive to prefer the newer version, when possible.
## Modifications
Change the default ordering to attempt to use the Log4J2LoggerFactory
before the Log4JLoggerFactory.
## Result
For environments where both Log4j1 and Log4j2 are available on the class
path (but Slf4J is not available), Netty will now use Log4j2 instead of
Log4j1.
### Motivation:
Introduction of `WebSocketDecoderConfig` made our server-side code more elegant and simpler for support.
However, there are still some problems with maintenance and new feature development for the WebSocket codecs (`WebSocketClientProtocolHandler`, `WebSocketServerProtocolHandler`).
Particularly, it makes me ~~cry with blood~~ extremely sad to add a new parameter and yet another constructor to these handlers when I want to contribute a new feature.
### Modification:
I've extracted all parameters for client and server WebSocket handlers into config/builder structures, like it was made for decoders in PR #9116.
### Result:
* Fixes #9698: Simplify WebSocket handlers constructor arguments hell
* Unblock further development in this module (configurable close frame handling on server-side; automatic close-frame sending, when missed; memory leaks on protocol violations; etc...)
Bonuses:
* All defaults are gathered in one place and could be easily found/reused.
* New API greatly simplifies usage, but does NOT allow inheritance or modification.
* New API would simplify long-term maintenance of WebSockets module.
### Example
```
WebSocketClientProtocolConfig config = WebSocketClientProtocolConfig.newBuilder()
    .webSocketUri("wss://localhost:8443/fx-spot")
    .subprotocol("trading")
    .handshakeTimeoutMillis(15000L)
    .build();
ctx.pipeline().addLast(new WebSocketClientProtocolHandler(config));
```
Motivation:
The javadocs of Http2Headers.method(...) are incorrect; we should fix them.
Modifications:
Correct javadocs
Result:
Fixes https://github.com/netty/netty/issues/8068.
Motivation:
At the moment we fail to poll the method queue when we see an informational response code. This can lead to out-of-sync request / response pairs when we later try to compare them.
Modifications:
Always poll the queue correctly
Result:
Always compare the correct request / response pairs
Motivation:
If maxDelayedQueues == 0 we should never put any WeakHashMap into the FastThreadLocal for a Thread.
Modifications:
Check if maxDelayedQueues == 0 and if so return directly. This will ensure we never call FastThreadLocal.initialValue() in this case
Result:
Less overhead / memory usage when maxDelayedQueues == 0
Motivation:
On macOS it is not good enough to check /etc/resolv.conf to determine the nameservers to use. We should retrieve the nameservers the same way mDNSResponder and Chromium do, via a JNI call.
Modifications:
Add MacOSDnsServerAddressStreamProvider and testcase
Result:
Use the correct nameservers by default on macOS.
Motivation:
Easier to debug SelfSignedCertificate failures.
Modifications:
Add first throwable as suppressed to thrown exception.
Result:
Less technical debt.
Motivation:
HTTP 102 (WebDAV) is not correctly treated as an informational response
Modification:
Delegate all `1XX` status codes to superclass, not just `100` and `101`.
Result:
Supports WebDAV response.
Removes a huge maintenance [headache](https://github.com/line/armeria/pull/2210) in Armeria which has forked the class for these features
Motivation:
Netty's HTTP/2 implementation is not 100% compliant with the spec. This
commit improves the compliance regarding header validation,
in particular pseudo-headers and connection-specific ones.
According to the spec:
All HTTP/2 requests MUST include exactly one valid value for the
":method", ":scheme", and ":path" pseudo-header fields, unless it is
a CONNECT request (Section 8.3). An HTTP request that omits
mandatory pseudo-header fields is malformed (Section 8.1.2.6).
Modifications:
- Introduce Http2HeadersValidator class capable of validating HTTP/2
headers
- Invoke validation from DefaultHttp2ConnectionDecoder#onHeadersRead
- Modify tests to use valid headers when required
- Modify HttpConversionUtil#toHttp2Headers to not add :scheme and
:path header on CONNECT method in order to conform to the spec
Result:
- Initial requests without :method, :path, :scheme will fail
- Initial requests with multiple values for :method, :path, :scheme
will fail
- Initial requests with an empty :path fail
- Requests with connection-specific header field will fail
- Requests with TE header different than "trailers" will fail
- Fixes 8.1.2.2 tests from h2spec #5761
- Fixes 8.1.2.3 tests from h2spec #5761
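A sketch of the mandatory pseudo-header rule (not the actual Http2HeadersValidator; class and method names are illustrative):
```
import io.netty.handler.codec.http2.Http2Headers;
import io.netty.handler.codec.http2.Http2Headers.PseudoHeaderName;

final class PseudoHeaderCheck {
    // A non-CONNECT request must carry exactly one :method, :scheme and :path pseudo-header.
    static boolean hasMandatoryPseudoHeaders(Http2Headers headers) {
        CharSequence method = headers.method();
        if (method != null && "CONNECT".contentEquals(method)) {
            return true; // CONNECT requests are exempt (Section 8.3)
        }
        return occursExactlyOnce(headers, PseudoHeaderName.METHOD.value())
                && occursExactlyOnce(headers, PseudoHeaderName.SCHEME.value())
                && occursExactlyOnce(headers, PseudoHeaderName.PATH.value());
    }

    private static boolean occursExactlyOnce(Http2Headers headers, CharSequence name) {
        return headers.getAll(name).size() == 1;
    }

    private PseudoHeaderCheck() { }
}
```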
Motivation:
At the moment we directly extend the Recycler base class in our code, which makes it hard to experiment with different object pool implementations. It would be nice to be able to switch from one to another via a system property in the future. This would also allow us to more easily test things like https://github.com/netty/netty/pull/8052.
Modifications:
- Introduce ObjectPool class with static method that we now use internally to obtain an ObjectPool implementation.
- Wrap the Recycler into an ObjectPool and return it for now
Result:
Preparation for different ObjectPool implementations
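An illustrative use of the new indirection (the pooled type here is hypothetical):
```
import io.netty.util.internal.ObjectPool;
import io.netty.util.internal.ObjectPool.Handle;

final class PooledThing {
    private static final ObjectPool<PooledThing> POOL =
            ObjectPool.newPool(handle -> new PooledThing(handle));

    private final Handle<PooledThing> handle;

    private PooledThing(Handle<PooledThing> handle) {
        this.handle = handle;
    }

    static PooledThing newInstance() {
        return POOL.get(); // currently backed by Recycler, but callers no longer depend on that
    }

    void recycle() {
        handle.recycle(this);
    }
}
```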