Motivation:
We should throw a NotYetConnectedException when ENOTCONN errno is set. This is also consistent with NIO.
Modification:
Throw the correct exception and add a test case
Result:
More correct and consistent behavior.
Motivation:
We often use javaChannel().socket().* in NIO as these methods exist in Java 6. The problem is that these often throw very general exceptions (like SocketException), while callers expect the exceptions listed in the NIO interfaces. When possible we should use the methods available in Java 7+, which throw the correct exceptions.
Modifications:
Check the Java version and, depending on it, use either the socket or the javaChannel().
Result:
The expected exceptions are thrown.
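For illustration, a minimal sketch of this kind of version-dependent dispatch; the helper class and its javaVersion parameter are hypothetical, not Netty's actual code:

```java
import java.io.IOException;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;

final class BindUtil {
    private BindUtil() { }

    // Hypothetical helper: prefer the Java 7+ channel method, which throws the
    // exceptions documented on the NIO interfaces, and fall back to the
    // java.net.Socket view on Java 6.
    static void bind(SocketChannel channel, SocketAddress localAddress, int javaVersion)
            throws IOException {
        if (javaVersion >= 7) {
            channel.bind(localAddress);            // NIO exceptions, e.g. AlreadyBoundException
        } else {
            channel.socket().bind(localAddress);   // more general SocketException
        }
    }
}
```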
Motivation:
To make connect exceptions easier to debug, we create new exceptions that also contain the remote address. For this we basically create a new instance and call setStackTrace(...). When doing this we pay an extra penalty because fillInStackTrace() is called by the super constructor.
Modifications:
Create special sub-classes of Exceptions that override the fillInStackTrace() method and so eliminate the overhead.
Result:
Less overhead when annotating connect exceptions.
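The pattern looks roughly like this (a sketch with an illustrative class name, not the exact Netty type):

```java
import java.net.ConnectException;

// Exception used only to "annotate" an original exception with the remote
// address; the stack trace is copied from the original via setStackTrace(...),
// so filling it in again in the constructor would be wasted work.
final class AnnotatedConnectException extends ConnectException {
    AnnotatedConnectException(ConnectException cause, String remoteAddress) {
        super(cause.getMessage() + ": " + remoteAddress);
        initCause(cause);
        setStackTrace(cause.getStackTrace());
    }

    // Skip the expensive native stack walk triggered by the super constructor.
    @Override
    public Throwable fillInStackTrace() {
        return this;
    }
}
```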
Motivation:
When SslHandler.close(...) is called (as part of Channel.close()), it will also try to flush pending messages. This may fail for various reasons, but we should still propagate the close operation.
Modifications:
- Ensure flush(...) itself will not throw an Exception if we were able to fail at least one pending promise (which should always be the case).
- If flush(...) fails as part of close, ensure we still close the channel and then rethrow.
Result:
No more lost close operations if an exception is thrown during close.
Motivation:
SystemPropertyUtil requires InternalLoggerFactory, which requires ThreadLocalRandom, which requires SystemPropertyUtil. This circular dependency can lead to a deadlock.
Modifications:
Ensure ThreadLocalRandom does not require SystemPropertyUtil during initialization.
Result:
No more deadlock possible.
Motivation:
Comments stating that AUTO_CLOSE will be removed in Netty 5.0 are wrong,
as there is no Netty 5.0.
Modifications:
Removed the comments.
Result:
No more references to Netty 5.0
Motivation:
RFC 6265 does not state that cookie names must be case insensitive.
Modifications:
Fix io.netty.handler.codec.http.cookie.DefaultCookie#equals() method to
use case sensitive String#equals() and String#compareTo().
Result:
It is possible to parse several cookies with the same name but with
different cases.
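For illustration, the behavioural difference between the case-insensitive and case-sensitive comparisons:

```java
// Cookie names are compared case-sensitively, so "SessionId" and "sessionid"
// are two distinct cookies.
final class CookieNameExample {
    public static void main(String[] args) {
        String a = "SessionId";
        String b = "sessionid";
        System.out.println(a.equals(b));            // false: treated as different cookies
        System.out.println(a.equalsIgnoreCase(b));  // true: the old, incorrect check
        System.out.println(a.compareTo(b) == 0);    // false: case-sensitive ordering
    }
}
```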
Motivation:
ReferenceCountedOpenSslEngine depends upon the SslContext to clean up JNI resources. If we don't wait until the ReferenceCountedOpenSslEngine is done with cleanup before cleaning up the SslContext, we may crash the JVM.
Modifications:
- Wait for the channels to close (and thus the ReferenceCountedOpenSslEngine to be cleaned up) before cleaning up the associated SslContext.
Result:
Cleanup sequencing is correct and the JVM no longer crashes.
Fixes https://github.com/netty/netty/issues/5692
Motivation:
In DefaultHttp2ConnectionEncoder we fail the promise in the FlowControlledData.error(...) method but also add it to the CoalescingBufferQueue, which can lead to the promise being failed by error(...) before it can be failed in the CoalescingBufferQueue.
This can lead to confusing and misleading errors in the log like:
2016-08-12 09:47:43,716 nettyIoExecutorGroup-1-9 [WARN ] PromiseNotifier - Failed to mark a promise as failure because it's done already: DefaultChannelPromise@374225e0(failure: javax.net.ssl.SSLException: SSLEngine closed already)
javax.net.ssl.SSLException: SSLEngine closed already
at io.netty.handler.ssl.SslHandler.wrap(...)(Unknown Source) ~[netty-all-4.1.5.Final-SNAPSHOT.jar:?]
Modifications:
Ensure we only fail the queue (which will also fail the promise).
Result:
No more misleading logs.
Motivation:
PendingWriteQueue should guard against re-entrant writes once removeAndWriteAll() is run.
Modifications:
Continue writing until queue is empty.
Result:
Correctly guard against re-entrance.
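A simplified sketch of a drain loop that tolerates re-entrant writes (illustrative, not the actual PendingWriteQueue code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

final class PendingWritesSketch {
    private final Queue<Runnable> queue = new ArrayDeque<Runnable>();

    void add(Runnable write) {
        queue.add(write);
    }

    // Writes performed while draining may add new entries to the queue
    // (re-entrant writes), so keep looping until the queue is really empty
    // instead of iterating over a one-time snapshot.
    void removeAndWriteAll() {
        while (!queue.isEmpty()) {
            Runnable write = queue.poll();
            write.run();
        }
    }
}
```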
Motivation:
We should fail all promises with the correct SSLENGINE_CLOSED exception once the engine is closed. We did not fail the current promise with this exception if the ByteBuf was not readable.
Modifications:
Correctly fail promises.
Result:
More correct handling of promises if the SSLEngine is closed.
Motivation:
As per the HTTP/2 spec, exceeding the MAX_CONCURRENT_STREAMS should be treated as a stream error as opposed to a connection error.
"An endpoint that receives a HEADERS frame that causes its advertised concurrent stream limit to be exceeded MUST treat this as a stream error (Section 5.4.2) of type PROTOCOL_ERROR or REFUSED_STREAM." http://httpwg.org/specs/rfc7540.html#rfc.section.5.1.2
Modifications:
Make the error a stream error.
Result:
It's a stream error.
Motivation:
Sometimes it is useful to be able to wrap an existing memory address (a.k.a. pointer) and create a ByteBuf from it. This way it's easier to interoperate with other libraries.
Modifications:
Add a new Unpooled.wrappedBuffer(....) method that takes a memory address.
Result:
Be able to wrap an existing memory address into a ByteBuf.
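A hedged usage sketch, assuming the new overload has a signature along the lines of wrappedBuffer(long memoryAddress, int size, boolean doFree):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.internal.PlatformDependent;

public final class WrapAddressExample {
    public static void main(String[] args) {
        // Allocate 64 bytes of native memory (stands in for memory handed to
        // us by another library).
        long address = PlatformDependent.allocateMemory(64);
        try {
            // Assumed signature: wrappedBuffer(long memoryAddress, int size, boolean doFree).
            // With doFree == false the caller stays responsible for freeing the memory.
            ByteBuf buf = Unpooled.wrappedBuffer(address, 64, false);
            buf.setLong(0, 42L);
            System.out.println(buf.getLong(0));
            buf.release();
        } finally {
            PlatformDependent.freeMemory(address);
        }
    }
}
```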
Motivation:
As issue #5539 says, OpenSsl will throw `java.lang.UnsatisfiedLinkError: org.apache.tomcat.jni.Library.version(I)I` when it is invoked. This patch tries to resolve the problem by modifying the native library loading logic of the OpenSsl class.
Modifications:
OpenSsl loads the tcnative lib via `NativeLibraryLoader.loadFirstAvailable()`. The native library is then loaded by the `netty-common` bundle's ClassLoader, which differs from the native class's ClassLoader. That is the root cause of the `UnsatisfiedLinkError` thrown when the native method is invoked.
So the native library should first be loaded via its own bundle classloader, and fall back to the embedded resources if that fails.
Result:
First of all, the error thrown when the native method is invoked is resolved.
Secondly, the native library should work as normal in a non-OSGi environment. The loading logic of `Library.class` in the `netty-tcnative` bundle is simple: try to load the library from the PATH; if not found, fall back to the original `NativeLibraryLoader.loadFirstAvailable()` logic.
Signed-off-by: XU JINCHUAN <xsir@msn.com>
Motivation:
The CorsHandler currently returns the Access-Control-Allow-Headers
header on a non-preflight CORS request (simple request).
As per the CORS specification, the Access-Control-Allow-Headers header
should only be returned on preflight requests (not on simple requests).
https://www.w3.org/TR/2014/REC-cors-20140116/#access-control-allow-headers-response-header
http://www.html5rocks.com/static/images/cors_server_flowchart.png
Modifications:
Modified CorsHandler.java to not add the Access-Control-Allow-Headers
header when responding to Non-preflight CORS request.
Result:
Access-Control-Allow-Headers header will not be returned on a Simple
request (Non-preflight CORS request).
Motivation:
Commit b963595988 added a unit test that will not work when KeyManagerFactory is used.
Modifications:
Only run the test if OpenSsl.useKeyManagerFactory() returns false.
Result:
Builds with boringssl
Motivation:
The HTTP static file server example seems to ignore filenames that don't contain only Latin characters, but these days people wish to serve files in other languages, or even include some emoji in the filename. Although these files are not displayed in the directory listing, they are accessible via HTTP requests. This fix will make such files visible.
Modifications:
I've changed the ALLOWED_FILE_NAME pattern to disallow only files that start with an underscore, a minus or a dot (such as .htaccess), and to hide other "unsafe" filenames that may be used to trigger security issues. Other filenames, including those containing the space character, are allowed.
I've also added charset encoding to the directory listing, because the browser default MAY be configured for ISO-8859-1 instead of UTF-8.
Result:
Directory listing will work for files that contain the space character, as well as other Unicode characters.
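For illustration, a pattern of this shape (the exact regex used in the example may differ):

```java
import java.util.regex.Pattern;

final class FileNameFilter {
    // Illustrative pattern: reject names that start with '.', '-' or '_' (hidden
    // or "unsafe" files such as .htaccess) but allow spaces and non-Latin characters.
    private static final Pattern ALLOWED_FILE_NAME = Pattern.compile("[^-._][^<>&\"]*");

    static boolean isAllowed(String name) {
        return ALLOWED_FILE_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("héllo wörld.txt")); // true
        System.out.println(isAllowed(".htaccess"));       // false
    }
}
```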
Motivation:
When the Netty HTTP static file server example renders a directory listing, it exposes the user.dir system property to the user. Although this isn't a security issue, it is bad practice to show it, and the user expects to see the server's virtual root instead, which is the absolute path as mentioned in the RFC.
Modifications:
The sendListing method now receives a third argument, which is the requested URI, and this is what is displayed on the page instead of the filesystem path.
Result:
The directory listing pages will show the virtual path as described in the URI and not the real filesystem path.
Removed fallback method
Motivation:
The private key and certificate that are passed into #setKeyMaterial() could be PemEncoded, in which case the #toPEM() methods return the identity of the value.
That in turn will fail in the #toBIO() step because the underlying ByteBuf is not necessarily direct.
Modifications:
- Use toBIO(...) which also works with non-direct PemEncoded values
- Add unit test.
Result:
Correct handling of PemEncoded.
Motivation:
A SOCKS5 proxy supports resolving the domain on the server side. When testing
with curl, the SocksServer in the example package only works for proxy
requests with an IP, not with a domain name (`--socks5` vs
`--socks5-hostname`). As curl is widely used, it should work with
the provided example.
Modifications:
Pass the address and port to the Socks5CommandResponse, so that it
works for curl.
Result:
`curl --socks5-hostname` works as expected.
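A sketch of the response construction, assuming the four-argument DefaultSocks5CommandResponse constructor (status, address type, bound address, bound port):

```java
import io.netty.handler.codec.socksx.v5.DefaultSocks5CommandResponse;
import io.netty.handler.codec.socksx.v5.Socks5AddressType;
import io.netty.handler.codec.socksx.v5.Socks5CommandResponse;
import io.netty.handler.codec.socksx.v5.Socks5CommandStatus;

final class Socks5ResponseExample {
    // Instead of replying with only a status (which leaves BND.ADDR/BND.PORT empty),
    // echo the bound address and port back to the client.
    static Socks5CommandResponse successResponse(String boundAddress, int boundPort) {
        return new DefaultSocks5CommandResponse(
                Socks5CommandStatus.SUCCESS, Socks5AddressType.IPv4, boundAddress, boundPort);
    }
}
```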
Motivation:
It's completely fine to start writing before the handshake completes when using SslHandler. The writes will just be queued.
Modifications:
Remove the misleading and incorrect javadoc.
Result:
Correct javadoc.
Motivation:
21e8d84b79 changed the way bounds checking was done; however, a bounds check in the case of READ_LITERAL_HEADER_NAME_LENGTH_PREFIX was using an old value. This would delay when the bounds check was actually done and potentially allow more allocation than necessary.
Modifications:
- Use the new length (index) in the bounds check instead of an old length (nameLength) which had not yet been assigned to the new value.
Result:
More correct bounds checking.
Motivation:
The HPACK decoder keeps state so that the decode method can be called multiple times with successive header fragments. This decoder also requires that a method is called to signify that decoding is complete. At this point a status is returned to indicate whether the max header size has been violated. Netty always accumulates the header fragments into a single buffer before attempting the HPACK decode process, so keeping state and delaying notification that bounds have been exceeded is not necessary.
Modifications:
- HPACK Decoder#decode(..) now must be called with a complete header block
- HPACK will terminate immediately if the maximum header length, or maximum number of headers is exceeded
- Reduce member variables in the HPACK Decoder class because they can now live in the decode(..) method
Result:
HPACK bounds checking is done earlier and less class state is needed.
Motivation:
765e944d4d imposed a limit on the maximum number of streams in all states. However, the default limit did not allow room for streams in addition to SETTINGS_MAX_CONCURRENT_STREAMS. This means that streams in states to which SETTINGS_MAX_CONCURRENT_STREAMS does not apply may not be reliably created.
Modifications:
- The default limit should be larger than SETTINGS_MAX_CONCURRENT_STREAMS
Result:
More lenient limit is applied to maxStreams by default.
Motivation:
SETTINGS_MAX_CONCURRENT_STREAMS does not apply to idle streams and thus we do not apply any explicit limitations on how many idle streams can be created. This may allow a peer to consume an undesirable amount of resources.
Modifications:
- Each Endpoint should enforce a limit for streams in any state. By default this limit will be the same as SETTINGS_MAX_CONCURRENT_STREAMS but can be overridden if necessary.
Result:
There is now a limit to how many IDLE streams can be created.
Motivation:
If Netty is used in a Tomcat container, Tomcat itself may ship tcnative. Because of this we will try to use OpenSsl in Netty and fail because it is different from netty-tcnative.
Modifications:
Ensure if we find tcnative it is really netty-tcnative before using it.
Result:
No more problems when using Netty in a Tomcat container that also has tcnative installed.
Motivation:
We do not need to mark the field as volatile, and doing so may confuse people.
Modifications:
Remove volatile and add a comment to explain why it's not needed.
Result:
More correct example.
Motivation:
When trying to call Cleaner.run() via reflection on Java 9, you may see an IllegalAccessException.
Modifications:
Just cast the Cleaner to Runnable to prevent an IllegalAccessException from being raised.
Result:
Freeing direct buffers also works on Java 9+ as expected.
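A rough sketch of the idea, assuming the buffer's cleaner implements Runnable (as jdk.internal.ref.Cleaner does on Java 9); this is not the exact Netty code:

```java
import java.lang.reflect.Field;
import java.nio.ByteBuffer;

final class FreeDirectBuffer {
    // Sketch only: obtain the DirectByteBuffer's "cleaner" field reflectively and,
    // instead of reflectively invoking a method on the internal Cleaner class
    // (which may trigger an IllegalAccessException on Java 9+), cast the cleaner
    // to Runnable and run it.
    static void free(ByteBuffer directBuffer) throws Exception {
        Field cleanerField = directBuffer.getClass().getDeclaredField("cleaner");
        cleanerField.setAccessible(true);
        Object cleaner = cleanerField.get(directBuffer);
        if (cleaner instanceof Runnable) {
            ((Runnable) cleaner).run(); // avoids reflective access to jdk.internal.ref.Cleaner
        }
    }
}
```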
Motivation:
We need to ensure we only call ReferenceCountUtil.safeRelease(...) in finalize() if the refCnt() > 0 as otherwise we will log a message about IllegalReferenceCountException.
Modification:
Check for a refCnt() > 0 before trying to release
Result:
No more IllegalReferenceCountException is produced when finalize() runs on OpenSsl* objects that were explicitly released before.
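A minimal sketch of the guard (simplified; the real classes are the reference-counted OpenSsl* types):

```java
import io.netty.util.AbstractReferenceCounted;
import io.netty.util.ReferenceCountUtil;

abstract class NativeResource extends AbstractReferenceCounted {
    @Override
    protected void finalize() throws Throwable {
        try {
            // Only release if the object was not already released explicitly;
            // otherwise safeRelease(...) would log an IllegalReferenceCountException.
            if (refCnt() > 0) {
                ReferenceCountUtil.safeRelease(this);
            }
        } finally {
            super.finalize();
        }
    }
}
```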
Motivation:
The netty-tcnative API has changed to remove a feature that contributed to a memory leak.
Modifications:
- Update to use the modified netty-tcnative API
Result:
Netty can use the latest netty-tcnative.
Motivation:
Our current strategy in NativeLibraryLoader is to mark the temporary .so file to be deleted on JVM exit. This has the drawback of not deleting the file in case the JVM dies or is killed.
Modification:
Just try to delete the file directly once we have loaded the native library, and if this fails, mark the file to be removed once the JVM exits.
Result:
Temporary files are less likely to remain on the system if the JVM is killed.
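A sketch of the new deletion strategy (simplified):

```java
import java.io.File;

final class TempLibCleanup {
    // After System.load(...) has succeeded the temporary copy of the library is
    // no longer needed on most platforms, so try to delete it right away and only
    // fall back to deleteOnExit() (which does not help if the JVM is killed).
    static void cleanup(File tmpLib) {
        if (!tmpLib.delete()) {
            // e.g. on Windows the file may still be in use by the JVM.
            tmpLib.deleteOnExit();
        }
    }
}
```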
Motivation:
The default limit for the maximum size in bytes at which a method will be inlined is 35 bytes. AbstractByteBuf#forEach and AbstractByteBuf#forEachDesc comprise method calls whose byte code exceeds this default threshold, which may prevent or delay inlining. The byte code for these methods can be reduced to allow for easier inlining. Here are the current byte code sizes:
AbstractByteBuf::forEachByte (24 bytes)
AbstractByteBuf::forEachByte(int,int,..) (14 bytes)
AbstractByteBuf::forEachByteAsc0 (71 bytes)
AbstractByteBuf::forEachByteDesc (24 bytes)
AbstractByteBuf::forEachByteDesc(int,int,.) (24 bytes)
AbstractByteBuf::forEachByteDesc0 (69 bytes)
Modifications:
- Reduce the code for each method in the AbstractByteBuf#forEach and AbstractByteBuf#forEachDesc call stack
Result:
AbstractByteBuf::forEachByte (25 bytes)
AbstractByteBuf::forEachByte(int,int,..) (25 bytes)
AbstractByteBuf::forEachByteAsc0 (29 bytes)
AbstractByteBuf::forEachByteDesc (25 bytes)
AbstractByteBuf::forEachByteDesc(int,int,..) (27 bytes)
AbstractByteBuf::forEachByteDesc0 (29 bytes)
Motivation:
Http2ConnectionDecoder#localSettings(Http2Settings) is not used in codec-http2 and currently results in duplicated code.
Modifications:
- Remove Http2ConnectionDecoder#localSettings(Http2Settings)
Result:
Smaller interface and less duplicated code.
Motivation:
In the latest refactoring we failed to clean up imports, and there are also some throws declarations which are not needed.
Modifications:
Clean up imports and throws declarations
Result:
Cleaner code.
Motivation:
There are a few duplicated byte[] CRLF fields in the code.
Modifications:
Removed the duplicated fields as they could be inherited from the parent encoder.
Result:
Less static fields.
Motivation:
We used incorrect assumeTrue(...) checks.
Modifications:
Fix the checks.
Result:
Tests can be run even if java.nio.DirectByteBuffer.<init>(long, int) cannot be accessed.
Motivation:
According to the Oracle documentation:
> java.net.preferIPv4Stack (default: false)
>
> If IPv6 is available on the operating system, the underlying native
> socket will be an IPv6 socket. This allows Java applications to connect
> to, and accept connections from, both IPv4 and IPv6 hosts.
>
> If an application has a preference to only use IPv4 sockets, then this
> property can be set to true. The implication is that the application
> will not be able to communicate with IPv6 hosts.
which means, if DnsNameResolver returns an IPv6 address, a user (or
Netty) will not be able to connect to it.
Modifications:
- Move the code that retrieves java.net.prefer* properties from
DnsNameResolver to NetUtil
- Add NetUtil.isIpV6AddressesPreferred()
- Revise the API documentation of NetUtil.isIpV*Preferred()
- Set the default resolveAddressTypes to IPv4 only when
NetUtil.isIpv4StackPreferred() returns true
Result:
- Fixes #5657
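For illustration, the java.net.prefer* flags are plain boolean system properties, so NetUtil-style helpers can be sketched as follows (illustrative, not the exact NetUtil code):

```java
final class IpStackPreference {
    // Sketch: Boolean.getBoolean(...) reads a system property and parses it as a boolean.
    static boolean isIpV4StackPreferred() {
        return Boolean.getBoolean("java.net.preferIPv4Stack");
    }

    static boolean isIpV6AddressesPreferred() {
        return Boolean.getBoolean("java.net.preferIPv6Addresses");
    }

    public static void main(String[] args) {
        // With -Djava.net.preferIPv4Stack=true the resolver should only ask for A records.
        System.out.println("prefer IPv4 stack: " + isIpV4StackPreferred());
    }
}
```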
Motivation:
The log4j2 project has released version 2.6.2, a bug fix release of
log4j2.
Modifications:
The commit upgrades the log4j2 dependency by modifying the
log4j2.version property in the parent POM to contain version 2.6.2.
Result:
The log4j2 dependency is upgraded to version 2.6.2.
Motivation:
Preparing platform dependent code for using unsafe requires executing
privileged code. The privileged code for initializing unsafe is executed
in a manner that would require all code leading up to the initialization
to have the requisite permissions. Yet, in a restrictive environment
(e.g., under a security policy that only grants the requisite
permissions to the Netty common jar but not to application code triggering
the Netty initialization), initializing unsafe will not succeed
even if the security policy would otherwise permit it.
Modifications:
This commit marks the necessary blocks as privileged. This enables
access to the necessary resources for initializing unsafe. The idea is
that we are saying the Netty code is trusted, and as long as the Netty
code has been granted the necessary permissions, then we will allow the
caller access to these resources even though the caller itself might not
have the requisite permissions.
Result:
Unsafe can be initialized in a restrictive security environment.
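The general shape of such a privileged block (illustrative, not the exact Netty code):

```java
import java.lang.reflect.Field;
import java.security.AccessController;
import java.security.PrivilegedAction;

final class UnsafeAccess {
    // Run the reflective lookup inside doPrivileged so that only the Netty code
    // (the protection domain of this class) needs the reflection permissions,
    // not every frame on the call stack that triggered initialization.
    static Object unsafeInstance() {
        return AccessController.doPrivileged(new PrivilegedAction<Object>() {
            @Override
            public Object run() {
                try {
                    Field theUnsafe = Class.forName("sun.misc.Unsafe").getDeclaredField("theUnsafe");
                    theUnsafe.setAccessible(true);
                    return theUnsafe.get(null);
                } catch (Throwable cause) {
                    return cause; // let the caller inspect why initialization failed
                }
            }
        });
    }
}
```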
Motivation:
It's not clear that the capacity is per thread.
Modifications:
Rename system property to make it more clear that the recycler capacity is per thread.
Result:
Less confusing.
Motivation:
Instrumenting the NIO selector implementation requires special
permissions. Yet, the code for performing this instrumentation is
executed in a manner that would require all code leading up to the
initialization to have the requisite permissions. In a restrictive
environment (e.g., under a security policy that only grants the
requisite permissions to the Netty transport jar but not to application
code triggering the Netty initialization), instrumenting the
selector will not succeed even if the security policy would otherwise
permit it.
Modifications:
This commit marks the necessary blocks as privileged. This enables
access to the necessary resources for instrumenting the selector. The
idea is that we are saying the Netty code is trusted, and as long as the
Netty code has been granted the necessary permissions, then we will
allow the caller access to these resources even though the caller itself
might not have the requisite permissions.
Result:
The selector can be instrumented in a restrictive security environment.
Motivation:
Writing to a system property requires permissions. Yet the code for
setting sun.nio.ch.bugLevel is not marked as privileged. In a
restrictive environment (e.g., under a security policy that only grants
the requisite permissions to the Netty transport jar but not to application
code triggering the Netty initialization), writing to this system
property will not succeed even if the security policy would otherwise
permit it.
Modifications:
This commit marks the necessary code block as privileged. This enables
writing to this system property. The idea is that we are saying the
Netty code is trusted, and as long as the Netty code has been granted
the necessary permissions, then we will allow the caller access to these
resources even though the caller itself might not have the requisite
permissions.
Result:
The system property sun.nio.ch.bugLevel can be written to in a
restrictive security environment.
Motivation:
The SCTP_INIT_MAXSTREAMS property is ignored on NioSctpServerChannel / OioSctpServerChannel.
Modifications:
- Correctly use the netty ChannelOption
- Ensure getOption(...) works
- Add testcase.
Result:
SCTP_INIT_MAXSTREAMS works.
Conflicts:
transport-sctp/src/main/java/io/netty/channel/sctp/DefaultSctpServerChannelConfig.java
Motivation:
We do not need to do an extra conditional check in retain(...) as we can just check for overflow after we do the increment.
Modifications:
- Remove extra conditional check
- Add test code.
Result:
One fewer conditional check.
Motivation:
OpenSslEngine and OpenSslContext currently rely on finalizers to ensure that native resources are cleaned up. Finalizers require the GC to do extra work, and this extra work can be avoided if the user instead takes responsibility for releasing the native resources.
Modifications:
- Make a base class for OpenSslEngine and OpenSslContext which does not have a finalizer but instead implements ReferenceCounted. If this engine is inserted into the pipeline it will be released by the SslHandler
- Add a new SslProvider which can be used to enable this new feature
Result:
Users can opt-in to a finalizer free OpenSslEngine and OpenSslContext.
Fixes https://github.com/netty/netty/issues/4958
Motivation:
AbstractReferenceCountedByteBuf has independent conditional statements to check the bounds of the retain IllegalReferenceCountException condition. One of the exceptions also uses the incorrect increment. The same fix was done for AbstractReferenceCounted in 01523e7835.
Modifications:
- Combined independent conditional checks into 1 where possible
- Correct IllegalReferenceCountException with incorrect increment
- Remove the subtraction used to check for overflow and re-use the addition and check for overflow to remove one arithmetic operation (see http://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.18.2)
Result:
AbstractReferenceCountedByteBuf has fewer independent branch statements and a more correct IllegalReferenceCountException. The compiled size of AbstractReferenceCountedByteBuf.retain() is reduced.
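A simplified sketch of the combined check (Netty uses an atomic field updater and throws IllegalReferenceCountException; this sketch uses a plain int and IllegalStateException to stay self-contained):

```java
final class RetainSketch {
    private int refCnt = 1;

    // Combined check: assuming increment > 0, an overflow (or a retain on an
    // already-released object) shows up as oldRef <= 0 or nextRef < oldRef,
    // so a single branch covers both illegal cases.
    RetainSketch retain(int increment) {
        int oldRef = refCnt;
        int nextRef = oldRef + increment;
        if (oldRef <= 0 || nextRef < oldRef) {
            throw new IllegalStateException("refCnt: " + oldRef + ", increment: " + increment);
        }
        refCnt = nextRef;
        return this;
    }
}
```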
Motivation:
If the user uses 0 as the quiet period, we should shut down without any delay if possible.
Modifications:
Ensure we do not introduce extra delay when a shutdown quiet period of 0 is used.
Result:
EventLoops shut down as fast as expected.