Those who need 'Origin' or 'Sec-WebSocket-Origin' headers should provide them explicitly, as stated in the WebSocket specs.
E.g. through custom headers:
HttpHeaders customHeaders = new DefaultHttpHeaders()
        .add(HttpHeaderNames.ORIGIN, "http://localhost:8080");
new WebSocketClientProtocolHandler(
        new URI("ws://localhost:1234/test"), WebSocketVersion.V13, subprotocol,
        allowExtensions, customHeaders, maxFramePayloadLength, handshakeTimeoutMillis);
* Remove enforced origin headers.
* Update tests
Fixes #9673: Origin header is always sent from WebSocket client
Motivation:
Http2ConnectionHandler tries to guard against sending multiple RST frames for the same stream. Unfortunately the code is not 100% correct, as it only updates the state after calling write. This can let an extra RST frame slip through if the second write of the RST frame is triggered from a listener attached to the promise.
Modifications:
- Update state before calling write
- Add unit test
Result:
Only ever send one RST frame per stream
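In simplified form the change looks roughly like this (a sketch only, reusing the existing Http2Stream/Http2FrameWriter method names, not the exact handler code):
```
// Sketch: mark the stream as reset *before* issuing the write, so that a
// listener attached to the promise cannot trigger a second RST frame.
if (stream.isResetSent()) {
    return promise.setSuccess();      // an RST frame was already sent for this stream
}
stream.resetSent();                   // update the state first ...
return frameWriter().writeRstStream(ctx, stream.id(), errorCode, promise);  // ... then write
```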
Motivation:
Modern versions of FreeBSD define IP_RECVORIGDSTADDR, but don't define
SOL_IP. This causes the build to fail.
Modifications:
The equivalent to SOL_IP on FreeBSD is IPPROTO_IP. Define SOL_IP as that
if SOL_IP is not defined and IPPROTO_IP is.
Result:
This allows a successful build on FreeBSD
Motivation:
This is a PR to solve the problem described here: https://github.com/netty/netty/issues/9767
Basically this PR is to add two more APIs in SslContextBuilder, for users to directly specify
the KeyManager or TrustManager they want to use when building SslContext. This is very helpful
when users want to pass in some customized implementation of KeyManager or TrustManager.
Modification:
This PR takes the first approach described here:
https://github.com/netty/netty/issues/9767#issuecomment-551927994
which is to immediately convert the managers into factories and let the factories continue to pass
through Netty.
1. Add in SslContextBuilder the two APIs mentioned above
2. Create a KeyManagerFactoryWrapper and a TrustManagerFactoryWrapper, which take a KeyManager
and a TrustManager respectively. These are two simple wrappers that do the conversion from
XXXManager class to XXXManagerFactory class
3. Create a SimpleKeyManagerFactory class (and internally an X509KeyManagerWrapper for compatibility),
which hides unnecessary details such as KeyManagerFactorySpi. This serves a similar purpose to
SimpleTrustManagerFactory, which already exists inside Netty.
Result:
Easier usage.
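A usage sketch of the new builder methods (customKm / customTm stand for the user's own, hypothetical implementations):
```
import javax.net.ssl.KeyManager;
import javax.net.ssl.TrustManager;

import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

final class CustomManagerExample {
    // customKm / customTm are the user's own (hypothetical) implementations.
    static SslContext clientContext(KeyManager customKm, TrustManager customTm) throws Exception {
        return SslContextBuilder.forClient()
                .keyManager(customKm)      // new API: pass a KeyManager directly
                .trustManager(customTm)    // new API: pass a TrustManager directly
                .build();
    }
}
```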
Motivation:
21720e4a78 introduced a change which aimed to enable the not_x86_64 profile when building on an x86_64 platform. Unfortunately it made an assumption which does not hold true, so the profile ended up enabled anyway. This led to native SSL tests being skipped if a non-boringssl implementation was used.
Modifications:
Fix profile activation to work as expected
Result:
Correctly run all native SSL tests
Motivation:
There is an intrinsic race between a local session resetting a stream
and the peer no longer sending any frames. This can result in the
session receiving frames for a stream that the local peer no longer
tracks. This results in a StreamException being thrown which triggers a
RST_STREAM frame, which is a good thing, but also logging at level WARN,
which is noisy for an expected and benign condition.
Modification:
Change the log level to DEBUG when logging stream errors with code
STREAM_CLOSED. All others are more interesting and will continue to be
logged at level WARN.
Additionally, it was found that DATA frames for streams that could not
have existed only resulted in a StreamException, even though the spec is clear
that such a situation should be fatal to the connection, resulting in a
GOAWAY(PROTOCOL_ERROR).
Fixes #8025.
Motivation:
394a1b3485 introduced a hard dependency on GLIBC 2.12, which was not the case before. As a result the native epoll transports could no longer be used on platforms which ship with earlier versions of GLIBC.
To keep things as backward compatible as possible we should not introduce such a change in a bugfix release.
Special thanks to @weissi for all the help to fix this.
Modifications:
- Use syscalls directly to remove dependency on GLIBC 2.12
- Make code consistent that needs newer GLIBC versions
- Adjust scattering read test to only run if recvmmsg syscall is supported
- Clean up pom.xml as some of it is not needed anymore after switching to syscalls.
Result:
Fixes https://github.com/netty/netty/issues/9758.
Motivation:
In Netty 5.x all pipeline operations are executed on the EventLoop, so there is no need for an atomic operation anymore when updating the handler state.
Modifications:
Remove atomic usage to handle the handler state
Result:
Simpler code and less overhead
Motivation:
7ff8cde66f introduced some tests which need some small adjustments for master.
Modifications:
Add explicit casts
Result:
master builds again without test failures
Motivation
JMH 1.22 was released recently; we might as well use the latest when
running benchmarks.
Summary of changes:
https://mail.openjdk.java.net/pipermail/jmh-dev/2019-November/002879.html
Modifications
Update jmh dependencies in microbench module from version 1.21 to 1.22.
Result
Benchmarks run using latest JMH
Motivation:
By default CloseWebSocketFrames are handled automatically.
However, I need to manually manage their sending on both the client and server side.
Modification:
Send the close frame automatically on channel close, when it was not explicitly sent before.
Result:
No more messages like "Connection closed by remote peer" for normal close flows.
Motivation:
The latest recommended Maven version is 3.6.2, so we should use it.
Modifications:
Update from 3.5.2 to 3.6.2
Result:
Use the latest recommended Maven version
Motivation:
In most cases, we want to use MultithreadEventLoopGroup without setting the number of threads, only the thread name. We should simplify this for the user.
Modifications:
Add a new constructor
Result:
The user can set just a ThreadFactory / Executor without also having to pass the thread number as 0:
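For example (a hypothetical sketch; the exact signature of the new constructor may differ, while DefaultThreadFactory and NioHandler.newFactory() are existing Netty types):
```
import io.netty.channel.EventLoopGroup;
import io.netty.channel.MultithreadEventLoopGroup;
import io.netty.channel.nio.NioHandler;
import io.netty.util.concurrent.DefaultThreadFactory;

// Hypothetical sketch of the new constructor: only a ThreadFactory (or Executor)
// and an IoHandlerFactory, no explicit thread count.
EventLoopGroup group = new MultithreadEventLoopGroup(
        new DefaultThreadFactory("worker"), NioHandler.newFactory());
```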
Motivation:
Netty currently doesn't build and distribute the test JARs. Having easy access to the test JARs would enable downstream projects (such as GraalVM) to integrate the Netty unit tests in their CI pipeline to ensure continuous compatibility with Netty features. The alternative would be to build Netty from source every time to obtain the test JARs; however, depending on the CI setup, that may not always be possible.
Modifications:
Modify `pom.xml` to enable generation of test JARs and corresponding source JARs.
Result:
Running the Maven build will create the test JARs and corresponding source JARs. This change was tested locally via `mvn install` and the test JARs are correctly copied under the Maven cache. The expectation is that running `mvn deploy` will also copy the additional JARs to the Maven repository.
Motivation:
MINIMAL_WAIT is a key constant, but when we see it we have to read more of the code to figure out whether it is in ms or ns. Improving the javadoc will help.
Modifications:
Improve the javadoc by adding "10ms", similar to DEFAULT_CHECK_INTERVAL, which documents "1s".
Result:
It is now easy to see that the value is in ms, and the javadoc style is consistent with other constants such as DEFAULT_CHECK_INTERVAL.
Motivation:
Currently when use of Unsafe is disabled and an internal reallocation is
performed for a direct PooledByteBuf, a one-off temporary duplicate is
made of the source and destination backing nio buffers so that the
copy can be done in a threadsafe manner.
The need for this can be reduced by sharing the temporary duplicate
buffer that is already stored in the corresponding destination
PooledByteBuf instance.
Modifications:
Have PoolArena#memoryCopy(...) take the destination PooledByteBuf
instead of the underlying mem reference and offset, and use
internalNioBuffer() to obtain/initialize a reusable duplicate of the
backing nio buffer.
Result:
Fewer temporary allocations when resizing direct pooled ByteBufs in the
non-Unsafe case
Motivation:
Currently, the only way to create the fixed-header-only messages PINGREQ,
PINGRESP and DISCONNECT is to explicitly instantiate a `MqttFixedHeader`, like:
```
MqttFixedHeader disconnectFixedHeader = new MqttFixedHeader(MqttMessageType.DISCONNECT,
        false, MqttQoS.AT_MOST_ONCE, false, 0);
MqttMessage disconnectMessage = new MqttMessage(disconnectFixedHeader);
```
According to the MQTT spec
(http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html#_Toc398718077),
the fixed-header flags for these messages are reserved and must be set to zero, otherwise
the receiver must close the connection. It's easy to mess this up when
you're creating the header explicitly, e.g. by setting the QoS to
`AT_LEAST_ONCE`.
As such, provide static constants for PINGREQ, PINGRESP and
DISCONNECT messages that will set the flags correctly for the developer.
Modification:
Add static constants to MqttMessage class to construct PINGREQ, PINGRESP and
DISCONNECT messages that will set the fixed-header flags correctly to 0.
Result:
Easier usage.
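With the new constants the example above reduces to something like this (assuming the constants are exposed as static fields named PINGREQ, PINGRESP and DISCONNECT on MqttMessage):
```
// Fixed-header flags are already set to the reserved values required by the spec.
channel.writeAndFlush(MqttMessage.PINGREQ);
channel.writeAndFlush(MqttMessage.PINGRESP);
channel.writeAndFlush(MqttMessage.DISCONNECT);
```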
Motivation:
To avoid a regression regarding connection-specific headers [1], we should add a test.
[1] https://tools.ietf.org/html/rfc7540#section-8.1.2.2
Modification:
Add a test that checks that the following headers are removed:
- Connection
- Host
- Keep-Alive
- Proxy-Connection
- Transfer-Encoding
- Upgrade
Result:
There's no functional change.
Motivation:
sun.security.ssl.X509KeyManagerImpl does not use "stable" aliases, so aliases may change between invocations. This makes caching useless, and we should therefore disable the cache when this implementation is used.
Modifications:
- Disable caching if sun.security.ssl.X509KeyManagerImpl is used
- Add tests
Result:
More protection against https://github.com/netty/netty/issues/9747.
Motivation:
At the moment the cache is not bounded and so can lead to huge memory consumption. We should ensure it is bounded by default.
Modifications:
Ensure the cache is bounded
Result:
Fixes https://github.com/netty/netty/issues/9747.
Motivation
There's currently no way to determine whether an arbitrary ByteBuf
behaves internally like a "singular" buffer or a composite one, and this
can be important to know when deciding how to manipulate it
efficiently.
An example of this is the ByteBuf#discardReadBytes() method which
increases the writable bytes for a contiguous buffer (by readerIndex)
but does not for a composite one.
Unfortunately !(buf instanceof CompositeByteBuf) is not reliable, since
for example this will be true in the case of a sliced CompositeByteBuf
or some third-party composite implementation.
isContiguous was chosen over isComposite since we want to assume "not
contiguous" in the unknown/default case - the doc will make it clear that
false does not imply composite.
Modifications
- Add ByteBuf#isContiguous() which returns true by default
- Override the "concrete" ByteBuf impls to return true and ensure
wrapped/derived impls delegate it appropriately
- Include some basic unit tests
Result
Better assumptions/decisions possible when manipulating arbitrary
ByteBufs, for example when combining/cumulating them.
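For example, a caller deciding how to reclaim space could use the new method like this (sketch):
```
import io.netty.buffer.ByteBuf;

final class ReclaimExample {
    static void reclaimReadBytes(ByteBuf buf) {
        if (buf.isContiguous()) {
            // Backed by a single contiguous region: discarding read bytes
            // actually increases the writable capacity.
            buf.discardReadBytes();
        } else {
            // Composite or unknown layout: discardReadBytes() would not gain
            // writable space, so handle it differently (e.g. copy/consolidate).
        }
    }
}
```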
Motivation:
Padding was removed from the CONTINUATION frame in the http2-spec, as shown in this [PR](https://github.com/http2/http2-spec/pull/510). We should follow suit.
Modifications:
- Remove padding when writing CONTINUATION frame in DefaultHttp2FrameWriter
- Add a unit test for writing large header with padding
Result:
More spec-compliant
Motivation
The recycling ratio is currently implemented by comparing with a masked
count. The mask operation is not free and also not necessary.
Modification
Change the count(s) to just iterate over the corresponding interval,
which requires only a comparison and no mask.
Also make "first time recycle" behaviour consistent and revert change to
RecyclerTest made in #9727.
Result
Less recycling overhead
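Roughly, the interval-based counting works like this (a generic illustration, not the actual Recycler code):
```
// Generic illustration: keep one object per 'interval' recycle attempts
// using a plain counter instead of masking a monotonically increasing count.
final class RatioDropper {
    private final int interval;   // e.g. 8 -> keep roughly 1 of every 8 objects
    private int count;

    RatioDropper(int interval) {
        this.interval = interval;
    }

    boolean drop() {
        if (count < interval) {
            count++;
            return true;          // drop this attempt
        }
        count = 0;                // keep this one and start a new interval
        return false;
    }
}
```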
Motivation:
We can move some methods etc to make encapsulation better in Recycler
Modifications:
Move / rename methods to make usage more clear
Result:
Code cleanup
Motivation
Per javadoc in 4.1.x SimpleChannelInboundHandler:
"Please keep in mind that channelRead0(ChannelHandlerContext, I) will be
renamed to messageReceived(ChannelHandlerContext, I) in 5.0."
Modifications
Rename aforementioned method and all references/overrides.
Result
Method is renamed.
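After the rename a handler override looks like this (sketch):
```
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class EchoHandler extends SimpleChannelInboundHandler<ByteBuf> {
    @Override
    protected void messageReceived(ChannelHandlerContext ctx, ByteBuf msg) {
        // The inbound buffer is released by SimpleChannelInboundHandler after this
        // method returns, so retain a duplicate before writing it back.
        ctx.writeAndFlush(msg.retainedDuplicate());
    }
}
```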
Motivation:
At the moment we only enforce the ratioMask for the Stack, which means we only guard against recycle bursts when objects are recycled from the same Thread. We should also enforce the ratioMask in the WeakOrderQueue so we guard against bursts when recycling from other threads as well.
Modifications:
- Keep counter in WeakOrderQueue to enforce ratioMask as well
- Adjust unit test
Result:
Better guard against recycle bursts which could pollute the heap unnecessarily.
Motivation:
If something is mis-configured, the "main" test will fail but it is unclear
whether it fails because the integration does not work or it wasn't applied
at all.
Also see:
https://github.com/netty/netty/issues/9738#issuecomment-548416693
Modifications:
This change adds a test that uses the same mechanism as BlockHound does
(`ServiceLoader`) and checks that `NettyBlockHoundIntegration` is present.
Result:
It is now clear whether the integration is not working or it wasn't loaded at all.
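The added check is essentially a ServiceLoader lookup, along these lines (a sketch, not the exact test code):
```
import static org.junit.Assert.assertTrue;

import java.util.ServiceLoader;
import org.junit.Test;
import reactor.blockhound.integration.BlockHoundIntegration;

public class BlockHoundIntegrationPresenceTest {
    @Test
    public void nettyIntegrationIsDiscoverable() {
        boolean found = false;
        for (BlockHoundIntegration integration : ServiceLoader.load(BlockHoundIntegration.class)) {
            // Compare by simple name to avoid depending on the (internal) class location.
            if ("NettyBlockHoundIntegration".equals(integration.getClass().getSimpleName())) {
                found = true;
            }
        }
        assertTrue("NettyBlockHoundIntegration not found via ServiceLoader", found);
    }
}
```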
Motivation:
Java 13 requires special flags to be set to make BlockHound work
Modifications:
- Added jdk13 profile to `transport-blockhound-tests`
- Enabled `-XX:+AllowRedefinitionToAddDeleteMethods` on jdk13
Result:
The tests work on Java 13
Motivation
Currently the visibility of the various Recycler inner classes and their
fields isn't optimal. Some private members are accessed by other classes
resulting in synthetic methods, and other non-private classes/members
are only accessed privately and so can be made private.
Modifications
- Increase/reduce visibility of various fields/methods/classes within
Recycler
- Have WeakOrderQueue extend WeakReference<Thread> to eliminate the
owner field
- Change local DefaultHandle var to DefaultHandle<?> to avoid raw type
compiler warning
Result
Tidier code, fewer implicit methods on hot paths (reducing inlining
depths)
Motivation:
We currently use a finalizer to ensure we correctly return the reserved space back to the Stack, but this is not really needed as we can return it when needed before dropping the WeakOrderQueue.
Modifications:
Use explicit method call to ensure we return the reserved space back before dropping the object
Result:
Less finalizer usage and so less work for the GC
Motivation:
We null out the element in the array after we decrement the current size of the Stack, but we do not directly write the updated size back to the stored field. This is problematic as we do some validation before writing it back, and so may never do so if the validation fails. This can later lead to null objects being returned where not expected.
Modifications:
Update size directly after null out object
Result:
No more unexpected null value possible
## Motivation
The InternalLoggerFactory attempts to instantiate different logger
implementations to discover what is available on the class path,
accepting the first implementation that does not throw an exception.
Currently, the default ordering will attempt to instantiate a Log4j1
logger before Log4j2. For environments where both Log4j1 and Log4j2 are
available, this will result in using the older version. It seems that it
would be more intuitive to prefer the newer version, when possible.
## Modifications
Change the default ordering to attempt to use the Log4J2LoggerFactory
before the Log4JLoggerFactory.
## Result
For environments where both Log4j1 and Log4j2 are available on the class
path (but Slf4J is not available), Netty will now use Log4j2 instead of
Log4j1.
### Motivation:
Introduction of `WebSocketDecoderConfig` made our server-side code more elegant and easier to support.
However there are still maintenance and new-feature-development problems with the WebSocket codecs (`WebSocketClientProtocolHandler`, `WebSocketServerProtocolHandler`).
In particular, it makes me ~~cry with blood~~ extremely sad to add a new parameter and yet another constructor to these handlers whenever I want to contribute a new feature.
### Modification:
I've extracted all parameters for the client and server WebSocket handlers into config/builder structures, as was done for the decoders in PR #9116.
### Result:
* Fixes #9698: Simplify WebSocket handlers constructor arguments hell
* Unblock further development in this module (configurable close frame handling on server-side; automatic close-frame sending, when missed; memory leaks on protocol violations; etc...)
Bonuses:
* All defaults are gathered in one place and could be easily found/reused.
* New API greatly simplifies usage, but does NOT allow inheritance or modification.
* New API would simplify long-term maintenance of WebSockets module.
### Example
WebSocketClientProtocolConfig config = WebSocketClientProtocolConfig.newBuilder()
    .webSocketUri("wss://localhost:8443/fx-spot")
    .subprotocol("trading")
    .handshakeTimeoutMillis(15000L)
    .build();
ctx.pipeline().addLast(new WebSocketClientProtocolHandler(config));
Motivation:
The javadocs of Http2Headers.method(...) are incorrect; we should fix them.
Modifications:
Correct javadocs
Result:
Fixes https://github.com/netty/netty/issues/8068.
Motivation:
At the moment we fail to poll the method queue when we see an informational response code. This can lead to out-of-sync request/response pairs when we later try to compare them.
Modifications:
Always poll the queue correctly
Result:
Always compare the correct request / response pairs
Motivation:
If maxDelayedQueues == 0 we should never put any WeakHashMap into the FastThreadLocal for a Thread.
Modifications:
Check if maxDelayedQueues == 0 and if so return directly. This will ensure we never call FastThreadLocal.initialValue() in this case
Result:
Less overhead / memory usage when maxDelayedQueues == 0
Motivation:
On macOS it is not good enough to check /etc/resolv.conf to determine the nameservers to use. We should retrieve the nameservers the same way mDNSResponder and Chromium do, via a JNI call.
Modifications:
Add MacOSDnsServerAddressStreamProvider and testcase
Result:
Use the correct nameservers by default on macOS.
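A sketch of wiring the new provider into a resolver explicitly (requires the netty-resolver-dns-native-macos dependency; method names follow the existing DnsNameResolverBuilder API):
```
import io.netty.channel.EventLoop;
import io.netty.channel.socket.nio.NioDatagramChannel;
import io.netty.resolver.dns.DnsNameResolver;
import io.netty.resolver.dns.DnsNameResolverBuilder;
import io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider;

final class MacOsResolverExample {
    static DnsNameResolver newResolver(EventLoop eventLoop) {
        return new DnsNameResolverBuilder(eventLoop)
                .channelType(NioDatagramChannel.class)
                .nameServerProvider(new MacOSDnsServerAddressStreamProvider())
                .build();
    }
}
```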
Motivation:
Easier to debug SelfSignedCertificate failures.
Modifications:
Add the first throwable as a suppressed exception to the one that is thrown.
Result:
Less technical debt.
Motivation:
HTTP 102 (WebDAV) is not correctly treated as an informational response
Modification:
Delegate all `1XX` status codes to superclass, not just `100` and `101`.
Result:
Supports WebDAV response.
Removes a huge maintenance [headache](https://github.com/line/armeria/pull/2210) in Armeria which has forked the class for these features
Motivation:
Netty's HTTP/2 implementation is not 100% compliant with the spec. This
commit improves the compliance of headers validation,
in particular for pseudo-headers and connection-specific headers.
According to the spec:
All HTTP/2 requests MUST include exactly one valid value for the
":method", ":scheme", and ":path" pseudo-header fields, unless it is
a CONNECT request (Section 8.3). An HTTP request that omits
mandatory pseudo-header fields is malformed (Section 8.1.2.6).
Modifications:
- Introduce Http2HeadersValidator class capable of validating HTTP/2
headers
- Invoke validation from DefaultHttp2ConnectionDecoder#onHeadersRead
- Modify tests to use valid headers when required
- Modify HttpConversionUtil#toHttp2Headers to not add :scheme and
:path header on CONNECT method in order to conform to the spec
Result:
- Initial requests without :method, :path, :scheme will fail
- Initial requests with multiple values for :method, :path, :scheme
will fail
- Initial requests with an empty :path fail
- Requests with connection-specific header field will fail
- Requests with TE header different than "trailers" will fail
- Fixes 8.1.2.2 tests from h2spec #5761
- Fixes 8.1.2.3 tests from h2spec #5761
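For reference, request headers that satisfy the mandatory pseudo-header rules can be built like this (a sketch using the existing DefaultHttp2Headers API):
```
// Sketch: request headers that satisfy the mandatory pseudo-header rules.
Http2Headers headers = new DefaultHttp2Headers()
        .method("GET")
        .scheme("https")
        .path("/resource")
        .authority("example.org");
// Connection-specific headers (e.g. "connection", "keep-alive") or a "te"
// value other than "trailers" would now cause the request to be rejected.
```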
Motivation:
At the moment we directly extend the Recycler base class in our code, which makes it hard to experiment with different object pool implementations. It would be nice to be able to switch from one to another via a system property in the future. This would also make it easier to test things like https://github.com/netty/netty/pull/8052.
Modifications:
- Introduce ObjectPool class with static method that we now use internally to obtain an ObjectPool implementation.
- Wrap the Recycler into an ObjectPool and return it for now
Result:
Preparation for different ObjectPool implementations
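A sketch of how a pooled type obtains its pool through the new abstraction (PooledThing is a made-up example class):
```
import io.netty.util.internal.ObjectPool;

final class PooledThing {
    private static final ObjectPool<PooledThing> POOL =
            ObjectPool.newPool(handle -> new PooledThing(handle));

    private final ObjectPool.Handle<PooledThing> handle;

    private PooledThing(ObjectPool.Handle<PooledThing> handle) {
        this.handle = handle;
    }

    static PooledThing newInstance() {
        return POOL.get();
    }

    void recycle() {
        // Return the instance to the pool so it can be reused.
        handle.recycle(this);
    }
}
```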