Motivation:
JCTools 2.0.2 provides an unbounded MPSC linked queue. Before we shaded JCTools we had our own unbounded MPSC linked queue and used it in various places but gave this up because there was no public equivalent available in JCTools at the time.
Modifications:
- Use JCTools' MPSC linked queue when no upper bound is specified
Result:
Fixes https://github.com/netty/netty/issues/5951
Motivation:
Docker's `--tmpfs` flag mounts the temp volume with `noexec` by default,
resulting in an UnsatisfiedLinkError. While this is good security
practice, it is a surprising failure from a seemingly innocuous flag.
Modifications:
Add a best-effort attempt in `NativeLibraryLoader` to detect when temp
files being loaded cannot be executed even when execution permissions
are set, often because the `noexec` flag is set on the volume.
Requires numerous additional exclusions to the Animal Sniffer config
for Java7 POSIX permissions manipulation.
Result:
Fixes [#6678].
Motivation:
As we now include native code for multiple platforms we need to generate an uber all jar before releasing it from the staging repository. For this the uber-staging profile can be used. To create a snapshot uber jar the uber-snapshot profile can be used.
Modifications:
- Add uber-staging and uber-snapshot profile
- Correct comment in pom.xml file to show usage.
Result:
Easier to create snapshot and release uber jars.
Motivation:
To ensure the release plugin works correctly we need to ensure all modules are included during build.
Modification:
- Include all modules
- Skip compilation and tests for native code when not supported but still include the module and build the jar
Result:
Build and release works again
Motivation:
When adding SNIMatcher support we missed using static delegating methods and so may try to load classes that do not exist in Java 7, which leads to errors.
Modifications:
- Correctly only try to load classes when running on java8+
- Ensure Java8+ related tests only run when using java8+
Result:
Fixes [#6700]
Motivation:
We currently don't have a native transport which supports kqueue https://www.freebsd.org/cgi/man.cgi?query=kqueue&sektion=2. This can be useful for BSD systems such as MacOS to take advantage of native features, and provide feature parity with the Linux native transport.
Modifications:
- Make a new transport-native-unix-common module with all the java classes and JNI code for generic unix items. This module will build a static library for each unix platform, and it will be included in the dynamic libraries used for JNI (e.g. transport-native-epoll, and eventually kqueue).
- Make a new transport-native-unix-common-tests module where the tests for the transport-native-unix-common module will live. This is so each unix platform can inherit from these tests and ensure they pass.
- Add a new transport-native-kqueue module which uses JNI to directly interact with kqueue
Result:
JNI support for kqueue.
Fixes https://github.com/netty/netty/issues/2448
Fixes https://github.com/netty/netty/issues/4231
Motivation:
Some work needs to be done to allow using the forbidden API check plugin when using java9.
Modifications:
Skip forbidden API check when using java9
Result:
Builds again with java9
Motivation:
In cases when an application is running in a container or is otherwise
constrained to the number of processors that it is using, the JVM
invocation Runtime#availableProcessors will not return the constrained
value but rather the number of processors available to the virtual
machine. Netty uses this number in sizing various resources.
Additionally, some applications will constrain the number of threads
that they are using independently of the number of processors available
on the system. Thus, applications should have a way to globally
configure the number of processors.
Modifications:
Rather than invoking Runtime#availableProcessors, Netty should rely on a
method that enables configuration when the JVM is started or by the
application. This commit exposes a new class NettyRuntime for enabling
such configuration. This value can only be set once. Its default value
is Runtime#availableProcessors so that there is no visible change to
existing applications, but enables configuring either a system property
or configuring during application startup (e.g., based on settings used
to configure the application).
Additionally, we introduce the usage of forbidden-apis to prevent future
uses of Runtime#availableProcessors from creeping in. Future work should
enable the bundled signatures and clean up uses of deprecated and
other forbidden methods.
Result:
Netty can be configured to not use the underlying number of processors,
but rather the constrained number of processors.
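For illustration, a minimal sketch of how an application might pin the processor count during startup, assuming the NettyRuntime accessors described above; the system property name in the comment is an assumption used only for illustration:
```java
import io.netty.util.NettyRuntime;

public final class ContainerBootstrap {
    public static void main(String[] args) {
        // Pin the processor count before any Netty component sizes its resources.
        // Alternatively the value could be supplied at JVM startup via a system
        // property (property name assumed for illustration):
        //   -Dio.netty.availableProcessors=2
        NettyRuntime.setAvailableProcessors(2);

        // Components that previously consulted Runtime#availableProcessors now
        // observe the configured value instead.
        System.out.println("Netty sees " + NettyRuntime.availableProcessors() + " processors");
    }
}
```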
https://github.com/netty/netty-tcnative/pull/215
Motivation
OCSP stapling (formally known as the TLS Certificate Status Request extension) is an alternative approach for checking the revocation status of X.509 certificates. Servers can preemptively fetch the OCSP response from the CA's responder, cache it for some period of time, and pass it along during (a.k.a. staple it to) the TLS handshake. The client no longer has to reach out on its own to the CA to check the validity of a certificate. Some of the key benefits are:
1) Speed. The client doesn't have to crosscheck the certificate.
2) Efficiency. The Internet is no longer DDoS'ing the CA's OCSP responder servers.
3) Safety. Less operational dependence on the CA. Certificate owners can sustain short CA outages.
4) Privacy. The CA can no longer track the users of a certificate.
https://en.wikipedia.org/wiki/OCSP_stapling
https://letsencrypt.org/2016/10/24/squarespace-ocsp-impl.html
Modifications
https://www.openssl.org/docs/man1.0.2/ssl/SSL_set_tlsext_status_type.html
Result
High-level API to enable OCSP stapling
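As a rough illustration only (not code from this change), a hedged sketch of how a server might wire up stapling with the OpenSSL provider; the enableOcsp and setOcspResponse names reflect my reading of the high-level API and should be treated as assumptions:
```java
import io.netty.handler.ssl.ReferenceCountedOpenSslEngine;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

import javax.net.ssl.SSLEngine;
import java.io.File;

public final class OcspStaplingSketch {

    // OCSP stapling requires the OpenSSL based provider; enableOcsp is assumed to
    // be the new high-level switch mentioned above.
    static SslContext newServerContext(File certChain, File key) throws Exception {
        return SslContextBuilder.forServer(certChain, key)
                .sslProvider(SslProvider.OPENSSL)
                .enableOcsp(true)
                .build();
    }

    // The DER-encoded OCSP response, pre-fetched from the CA's responder and cached
    // by the application, is handed to the engine before the handshake completes.
    static void staple(SSLEngine engine, byte[] cachedOcspResponse) {
        ((ReferenceCountedOpenSslEngine) engine).setOcspResponse(cachedOcspResponse);
    }
}
```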
Motivation:
In OpenSslCertificateException we tried to validate the supplied error code but did not correctly account for all different valid error codes and so threw an IllegalArgumentException.
Modifications:
- Fix validation by updating to latest netty-tcnative and use CertificateVerifier.isValid
- Add unit tests
Result:
Validation of error code works as expected.
Motivation:
Conscrypt is a Java Security provider that wraps OpenSSL (specifically BoringSSL). It's a possible alternative to Netty-tcnative that we should explore. So this commit is just to enable us to further investigate its use.
Modifications:
Modifying the SslContext creation path to support the Conscrypt provider.
Result:
Netty will support OpenSSL with conscrypt.
Motivation:
Projects may import multiple libraries which use different versions of Netty.
Modifications:
Add 'netty-bom' meta-project that contains the other projects in a dependencyManagement section.
Result:
Developers can import the BOM to enforce a specific version of Netty.
Motivation:
We do not support all SSLParameters settings, so we should throw if a user tries to use an unsupported one.
Modifications:
- Check for unsupported parameters
- Add unit test
Result:
Less surprising behavior.
Motivation:
We should move the AutobahnTestsuite to an extra module. This makes it easier to run only the testsuite or only the autobahntestsuite.
Modifications:
Create a new module (testsuite-autobahn)
Result:
Better project structure.
Motivation:
autobahntestsuite-maven-plugin 0.1.4 was released and supports Java9.
Modifications:
Update plugin to be able to run tests on Java9
Result:
Autobahntestsuite can also be run on Java9.
Motivation:
OpenSSL doesn't automatically verify hostnames and requires extra method calls to enable this feature [1]. We should allow this to be configured.
Modifications:
- SSLParameters#getEndpointIdentificationAlgorithm() should be respected and configured via tcnative interfaces.
Result:
OpenSslEngine respects hostname verification.
[1] https://wiki.openssl.org/index.php/Hostname_validation
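For reference, the standard JDK way of requesting hostname verification on an engine, which this change makes the OpenSSL engine honor as well (a minimal sketch, not code from this change):
```java
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

final class HostnameVerificationExample {
    static void enableHostnameVerification(SSLEngine engine) {
        SSLParameters params = engine.getSSLParameters();
        // "HTTPS" requests RFC 2818 style hostname checks during the handshake.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        engine.setSSLParameters(params);
    }
}
```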
Motivation:
We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided implementation when running on JDK >= 7.
Modification:
Using ThreadLocalRandom implementation of the JDK when possible.
Result:
Make use of JDK implementations when possible.
Motivation:
We missed some stuff in 5728e0eb2c and so the build failed on java9
Modifications:
- Add extra cmdline args when needed
- skip the autobahntestsuite as jython does not work with java9
- skip the osgi testsuite as the maven plugin does not work with java9
Result:
Build finally passed on java9
Motivation:
Commit 591293bfb4 changed the build to need java8 but missed adjusting the enforcer rule as well.
Modifications:
Enforce java8+
Result:
Quickly fail when a user tries to compile with a pre-Java 8 JDK.
Motivation:
Java 8 has been out for some time now and JDK 7 is no longer officially supported. We should remove all our backports and just use what the JDK provides us. This will also allow us to use intrinsics that are offered by the JDK implementations.
Modifications:
Remove all backports of jdk8 classes.
Result:
Use what the JDK offers us. This also fixes [#5458]
Motivation:
tcnative was moved into an internal package.
Modifications:
Update package for tcnative imports.
Result:
Use correct package names for tcnative.
Motivation:
We need to pass special arguments to run with jdk9 as otherwise some tests will not be able to run.
Modifications:
Allow to define extra arguments when running with jdk9
Result:
Tests pass with jdk9
Motivation:
As we now need to compile with java8 we should still allow running the tests with a different java version to ensure everything also works with java 7 and 6.
Modifications:
Allow passing "-DtestJavaHome" to the build and so use a different java version when running the tests.
Result:
Be able to run tests with different java versions.
Motivation:
tcnative has updated how constants are defined and removed some constants which are either obsolete or now set directly in tcnative.
Modifications:
- update to compile against tcnative changes.
Result:
Netty compiles with tcnative options changes.
Motivation:
Currently Netty utilizes BIO_new_bio_pair so we can control all FD lifetime and event notification but delegates to OpenSSL for encryption/decryption. The current implementation sets up a pair of BIO buffers to read/write encrypted/plaintext data. This approach requires copying of data from Java ByteBuffers to native memory BIO buffers, and also requires both BIO buffers to be sufficiently large to hold application data. If direct ByteBuffers are used we can avoid copying to/from the intermediate BIO buffer and just read/write directly from the direct ByteBuffer memory. We still need an internal buffer because OpenSSL may generate write data as a result of read calls (e.g. handshake, alerts, renegotiation, etc.), but this buffer doesn't have to be large enough to hold application data.
Modifications:
- Take advantage of the new ByteBuffer based BIO provided by netty-tcnative instead of using BIO_read and BIO_write.
Result:
Less copying and lower memory footprint requirement per TLS connection.
Motivation:
We used various mocking frameworks. We should only use one...
Modifications:
Make usage of mocking framework consistent by only using Mockito.
Result:
Fewer dependencies and more consistent mocking usage.
Motivation:
Previous versions of netty-tcnative used the org.apache.tomcat namespace which could lead to problems when a user tried to use tomcat and netty in the same app.
Modifications:
Use netty-tcnative which now uses a different namespace and adjust code to some API changes.
Result:
It's now possible to use netty-tcnative even when running together with tomcat.
Motivation:
We need to update jetty-alpn-agent to support latest JDK releases.
Modifications:
Update jetty-alpn-agent to 2.0.6
Result:
Be able to run tests with latest JDK releases.
Motivation:
When an empty hostname is used in DnsNameResolver.resolve*(...) it will never notify the future / promise. The root cause is that we do not correctly guard against errors from IDN.toASCII(...), which throws an IllegalArgumentException when it cannot parse its input. That said, we should also handle an empty hostname the same way the JDK does and just use "localhost" in that case.
Modifications:
- If we try to resolve an empty hostname we use localhost
- Correctly guard against errors raised by IDN.toASCII(...) so we will always notify the future / promise
- Add unit test.
Result:
DnsNameResolver.resolve*(...) will always notify the future.
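A hedged sketch of the guarding described above; the helper and its surroundings are illustrative, not the actual resolver internals:
```java
import io.netty.util.concurrent.Promise;

import java.net.IDN;
import java.net.InetAddress;

final class ResolveGuardSketch {
    static void resolve(String inetHost, Promise<InetAddress> promise) {
        // Empty host names resolve to localhost, matching the JDK behaviour.
        String hostname = inetHost == null || inetHost.isEmpty() ? "localhost" : inetHost;
        try {
            hostname = IDN.toASCII(hostname);
        } catch (IllegalArgumentException e) {
            // Previously this escaped unhandled and the promise was never notified.
            promise.setFailure(e);
            return;
        }
        // ... continue with the actual DNS query using the sanitised hostname ...
    }
}
```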
Motivation:
The OpenSSL provider should behave the same as the JDK provider when mutual authentication is required and a specific set of trusted Certificate Authorities is specified. During the SSL handshake the same list of configured Certificate Authorities should be sent back to the connected peer.
Modifications:
Correctly set the CA list.
Result:
Correct behaviour, matching the JDK implementation.
Motivation:
It is often useful to run the tests with different command-line arguments (like different system properties).
Modifications:
Introduce argLine.javaProperties, which can be set from the command line to add arguments that should be appended when running the unit tests.
Result:
More flexible way to run the tests.
Motivation:
Java9 will be released soon so we should ensure we can compile netty with Java9 and run all our tests. This will help to make sure Netty will be usable with Java9.
Modification:
- Add some workarounds to be able to compile with Java9, note that the full profile is not supported with Java9 atm.
- Remove some usage of internal APIs to be able to compile on java9
- Do not support ALPN / NPN and so do not run the related tests when using Java9 for now. We will do a follow-up PR to add support.
Result:
It's possible to build netty and run its testsuite with Java9.
Motivation:
We tried to detect the correct alert to use depending on the CertificateException that is thrown by the TrustManager. This did not work all the time, as depending on the TrustManager implementation it may also wrap a CertPathValidatorException.
Modification:
- Try to unwrap the CertificateException if needed and detect the right alert via the CertPathValidatorException.
- Add unit test to verify
Result:
Send the correct alert depending on the CertificateException when using OpenSslEngine.
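A hedged sketch of the unwrapping idea; the TLS alert numbers below are the standard RFC 5246 codes, while the mapping itself is illustrative rather than the actual OpenSslEngine code:
```java
import java.security.cert.CertPathValidatorException;
import java.security.cert.CertificateException;

final class AlertMappingSketch {
    private static final int BAD_CERTIFICATE = 42;      // RFC 5246 alert descriptions
    private static final int CERTIFICATE_REVOKED = 44;
    private static final int CERTIFICATE_EXPIRED = 45;
    private static final int CERTIFICATE_UNKNOWN = 46;

    static int alertFor(CertificateException e) {
        // The TrustManager may wrap the interesting CertPathValidatorException,
        // so walk the cause chain instead of looking only at the top-level type.
        Throwable cause = e.getCause();
        while (cause != null) {
            if (cause instanceof CertPathValidatorException) {
                CertPathValidatorException.Reason reason = ((CertPathValidatorException) cause).getReason();
                if (reason == CertPathValidatorException.BasicReason.EXPIRED) {
                    return CERTIFICATE_EXPIRED;
                }
                if (reason == CertPathValidatorException.BasicReason.REVOKED) {
                    return CERTIFICATE_REVOKED;
                }
                if (reason == CertPathValidatorException.BasicReason.NOT_YET_VALID) {
                    return BAD_CERTIFICATE;
                }
                return CERTIFICATE_UNKNOWN;
            }
            cause = cause.getCause();
        }
        return CERTIFICATE_UNKNOWN;
    }
}
```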
Motivation:
A new version of CentOS was released; we should verify against it when releasing.
Modifications:
Bump up version.
Result:
Release on latest centos version.
Motivation:
Add a test case for doing mutual auth with a certificate chain that holds more than one certificate.
Modifications:
Add test case
Result:
More tests.
Motivation:
For the leak profile we attempted to increase the number of leak hints that were retained to make debugging easier, but there was a typo.
Modifications:
- maxRecord -> maxRecords
Result:
Fix typo in pom.xml so leak profile provides more context for leaks.
Motivation:
Since netty shaded JCTools the OSGi manifest is no longer correct. It claims to
have an optional import "org.jctools.queues;resolution:=optional,org.jctools.queues.atomic;resolution:=optional,org.jctools.util;resolution:=optional".
However, since it is shaded, this is no longer true.
This was noticed when making JCTools a real bundle and netty resolved it as
optional import.
Modifications:
Modify the generated manifest by no longer analyzing org.jctools for imports.
A manual setting of sun.misc as optional was required.
Result:
Netty OSGi bundle will no longer interfere with a JCTools bundle.
Motivation:
We need to ensure we do not set duplicated certificates when using OpenSslEngine.
Modifications:
- Skip the first cert in the chain when setting the chain itself and so do not send duplicated certificates
- Add interop unit tests to ensure no duplicates are sent.
Result:
No more duplicates.
Motivation:
Previously, the way CertificateRequestCallback worked had some issues which could cause memory leaks and segfaults. Because of this, the tcnative code was updated to change the signature of the method provided by the interface.
Modifications:
Update CertificateRequestCallback implementations to match new interface signature.
Result:
No more segfaults / memory leaks when using boringssl or openssl >= 1.1.0
Motivation:
In NIO we often use javaChannel().socket().* as these methods exist in Java 6. The problem is that these often throw very general exceptions (like SocketException) while callers expect the exceptions listed in the NIO interfaces. When possible we should use the newer methods available in Java 7+, which throw the correct exceptions.
Modifications:
Check the Java version and, depending on it, use either the socket or the java channel.
Result:
Throw expected Exceptions.
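A minimal sketch of the idea, assuming Netty's PlatformDependent.javaVersion() helper; the surrounding channel code is simplified:
```java
import io.netty.util.internal.PlatformDependent;

import java.io.IOException;
import java.net.SocketAddress;
import java.nio.channels.SocketChannel;

final class VersionAwareBindSketch {
    static void doBind(SocketChannel javaChannel, SocketAddress localAddress) throws IOException {
        if (PlatformDependent.javaVersion() >= 7) {
            // Java 7+: throws the exception types declared by the NIO interfaces,
            // e.g. AlreadyBoundException or UnsupportedAddressTypeException.
            javaChannel.bind(localAddress);
        } else {
            // Java 6 fallback: only throws the more general SocketException/IOException.
            javaChannel.socket().bind(localAddress);
        }
    }
}
```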
Motivation:
netty-tcnative API has changed to remove a feature that contributed to a memory leak.
Modifications:
- Update to use the modified netty-tcnative API
Result:
Netty can use the latest netty-tcnative.
Motivation:
The log4j2 project has released version 2.6.2, a bug fix release of
log4j2.
Modifications:
The commit upgrades the log4j2 dependency by modifying the
log4j2.version property in the parent POM to contain version 2.6.2.
Result:
The log4j2 dependency is upgraded to version 2.6.2.
Motivation:
When the user constructs Lz4FrameDecoder with a Checksum implementation like CRC32 or Adler32 and uses Java8 we can directly use a ByteBuffer to do the checksum work. This way we can eliminate memory copies.
Modifications:
Detect if ByteBuffer can be used for checksum work and if so reduce memory copies.
Result:
Less memory copies when using JDK8
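A hedged sketch of the detection idea; in Netty this sits behind a Java-version check, while the snippet below simply compiles against Java 8, where CRC32 and Adler32 gained update(ByteBuffer) overloads:
```java
import java.nio.ByteBuffer;
import java.util.zip.Adler32;
import java.util.zip.CRC32;
import java.util.zip.Checksum;

final class ByteBufferChecksumSketch {
    static void update(Checksum checksum, ByteBuffer data) {
        if (checksum instanceof CRC32) {
            // Java 8+: feed the (possibly direct) buffer straight to the checksum.
            ((CRC32) checksum).update(data);
        } else if (checksum instanceof Adler32) {
            ((Adler32) checksum).update(data);
        } else {
            // Fallback for arbitrary Checksum implementations: copy into a byte[].
            byte[] tmp = new byte[data.remaining()];
            data.get(tmp);
            checksum.update(tmp, 0, tmp.length);
        }
    }
}
```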
Motivation:
We recently added support for session ticket statistics which we can expose now.
Modifications:
Expose the statistics.
Result:
Be able to obtain session ticket statistics.
Motivation:
It is good to keep the dependencies and plugins we use up-to-date to pick up bug fixes made by their authors.
Modification:
Scanned dependencies and plugins and carefully updated one by one.
Result:
Dependencies and plugins are up-to-date.
Motivation:
When the user home, and thus the path to the local maven repository, contains spaces, running the tests currently fails (at least on Windows).
Modifications:
Put double quotes around the ${settings.localRepository}
Result:
Be able to run build and tests even when user home path has spaces in it.
Motivation:
JCTools supports both unsafe and non-unsafe versions of its queues and works on JDK 6, which allows us to shade the library into netty-common and keep it "zero dependency".
Modifications:
- Remove copy paste JCTools code and shade the library (dependencies that are shaded should be removed from the <dependencies> section of the generated POM).
- Remove usage of OneTimeTask and remove it altogether.
Result:
Less code to maintain, easier updates of JCTools, and less GC pressure as the queue implementation does not create as much garbage.
Motivation:
Currently the default log level when running tests is debug. When
running the build on the CI server it might be nice to avoid this debug
level and allow for the level to be configured.
Modifications:
Added a logback-test.xml configuration that has been added to the
common module. This allows for the logLevel to be configured.
The default level will still be debug.
Result:
The log level can now be configured from the command line:
$ mvn test -DlogLevel=error
Motivation:
Before release 4.1.0.Final we should update all our dependencies.
Modifications:
Update dependencies.
Result:
Up-to-date dependencies used.
Motivation:
As we only provide tcnative jars for 64-bit, we should enforce 64-bit when building netty, to make it easier for the user to understand why the build fails.
Modifications:
Add enforce rule.
Result:
Ensure 64-bit is used when building netty.
Motivation:
When writing an SMTP client, a provided SMTP codec that follows RFC 2821 is useful.
Modification:
Add client side codec and test.
Results:
People who want to write an SMTP client can reuse the codec.
Motivation:
See #3095
Modifications:
Add Log4J2LoggerFactory and Log4J2Logger which is an InternalLogger implementation based on log4j2.
Result:
The user can use log4j2 directly without a special slf4j binding.
Motivation:
We should upgrade to latest netty-tcnative version.
Modifications:
Upgrade to version 1.1.33.Fork15
Result:
Latest netty-tcnative version is used.
Motivation:
See #3172
Modifications:
https://github.com/netty/netty-build/pull/6 added a junit timeout listener to the netty-build project. This patch just sets it up.
Result:
If a test sets the timeout parameter using junit's @Test(timeout = ...) and the timeout is triggered, a full stack trace dump will be output, along with any detected deadlocks.
Motivation:
See https://github.com/netty/netty-build/issues/5
Modifications:
Add xml-maven-plugin to check indentation and fix violations
Result:
pom.xml will be checked in the PR build
Motivation:
A build warning is shown when building the netty project, due to the missing paxexam plugin version mentioned below [1].
Modifications:
Added paxexam plugin version into root pom
Result:
paxexam related warning will not be displayed when building the project.
[1]. #4909
Motivation:
Depending on the actual CertificateException we should set the correct alert type so it will be sent back to the remote peer and so make it easier for them to fix it.
Modification:
Correctly set the alert and not always just use a general alert.
Result:
It's easier for the remote peer to fix the problems.
Motivation:
netty-tcnative-1.1.33.Fork13 was released, so we should upgrade. Also, we should skip the renegotiation tests if boringssl is used, because boringssl does not support renegotiation.
Modifications:
- Upgrade to netty-tcnative-1.1.33.Fork13
- Skip renegotiate tests if boringssl is used.
Result:
Use newest version of netty-tcnative and be able to build if boringssl is used.
Motivation:
ResourceLeakDetector enforces a limit as to how large the queue is allowed to grow for stack traces in order to keep memory from growing too large. However it is not always clear when records are dropped, or how many have been dropped. This can make interpreting leak reports more difficult if you assume all information is present when it may not be. Also we should increase the limit (currently 4) when running tests on the CI servers.
Modifications:
- Increase leak detector record limit on CI servers from 4 to 32.
- Track how many records have been discarded and disclose this in the leak report.
Result:
Leak reports clarify how many records were dropped, and how to increase the limit.
Motivation:
We had reports of failures before when sun.misc.Unsafe was not present. We should also run our tests with it disabled to ensure everything works even if sun.misc.Unsafe is not present on the system.
Modifications:
Add a new profile which allows to run tests without Unsafe (using -PnoUnsafe)
Result:
Better testing of netty for systems where sun.misc.Unsafe is not present.
Motivation:
Builds fail with java 1.8.0_72 because jetty-alpn-boot has absorbed new code from openjdk and older versions are now incompatible.
Modifications:
- Updated jetty-alpn-agent version
Result:
We can now build/develop using java 1.8.0_72
Motivation:
When using SslProvider.OPENSSL we currently do not handle SNI on the client side.
Modifications:
Correctly enable SNI when using clientMode and peerHost != null.
Result:
SNI works even with SslProvider.OPENSSL.
Motivation:
As we can now easily build statically linked versions of tcnative, it makes sense to run our netty build against all of them.
This helps ensure our code works with libressl, openssl and boringssl.
Modifications:
Allow to specify -Dtcnative.artifactId= and -Dtcnative.version=
Result:
Easy to run netty build against different tcnative flavors.
Motivation:
Attempts to enable SSL protocols which are currently disabled fail when using the OpenSslEngine. Related to https://github.com/netty/netty/issues/4736
Modifications:
Clear out all options that have disabled SSL protocols before attempting to enable any SSL protocol.
Result:
setEnabledProtocols works as expected.
Motivation:
Netty was missing support for the Protobuf Nano runtime, which targets
weaker systems such as Android devices.
Modifications:
Added ProtobufEncoderNano and ProtobufDecoderNano
in order to provide support for the Nano runtime.
Modified ProtobufVarint32FrameDecoder and
ProtobufLengthFieldPrepender in order to remove any
dependency on either the Nano or Lite runtime by copying
the code for handling Protobuf varint32 in from the
Protobuf library.
Modified Licenses and NOTICE in order to reflect the
changes I made.
Added the Protobuf Nano runtime as an optional dependency.
Result:
Netty now supports Protobuf Nano runtime.
Motivation:
Parent pom.xml uses deprecated maven expressions, such as `${groupId}`
which should be ${project.groupId}.
This causes tons of warnings on every module in the build.
Modifications:
Use up to date syntax.
Result:
No more maven warnings.
Motivation:
We had to add a new profile for each OpenJDK/OracleJDK release to make
Maven choose the correct alpn-boot.jar and npn-boot.jar. As a result,
our pom.xml has a large number of `<profile/>` sections.
Modifications:
- Use jetty-alpn-agent, which chooses the correct alpn-boot.jar and
npn-boot.jar automatically to remove all the nasty profile sections
from pom.xml
- Visit https://github.com/trustin/jetty-alpn-agent for more info
Result:
Cleaner pom.xml
Motivation:
A single 16-bit UTF-16 code unit cannot represent the full range of Unicode characters, thus UTF-16 has the concept of a surrogate pair (http://unicode.org/glossary/#surrogate_pair) where two 16-bit code units are combined to represent the supplementary characters. ByteBufUtil.writeUtf8 currently does not support this and is thus incomplete.
Modifications:
- Add support for surrogate pairs in ByteBufUtil.writeUtf8
Result:
ByteBufUtil.writeUtf8 now supports surrogate pairs and is correctly converting to UTF-8.
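To make the missing piece concrete, a simplified sketch of how a surrogate pair becomes a single 4-byte UTF-8 sequence (bounds checks and the 1-3 byte cases are elided; this is not the actual ByteBufUtil code):
```java
import java.io.ByteArrayOutputStream;

final class SurrogatePairUtf8Sketch {
    // Encodes the surrogate pair starting at index i of 'seq' into 'out' and
    // returns the next index to read.
    static int writeSurrogatePair(CharSequence seq, int i, ByteArrayOutputStream out) {
        char high = seq.charAt(i);
        char low = seq.charAt(i + 1);                 // validity checks elided
        int codePoint = Character.toCodePoint(high, low);

        // 4-byte UTF-8 encoding of a supplementary code point (U+10000..U+10FFFF).
        out.write(0xF0 | (codePoint >>> 18));
        out.write(0x80 | ((codePoint >>> 12) & 0x3F));
        out.write(0x80 | ((codePoint >>> 6) & 0x3F));
        out.write(0x80 | (codePoint & 0x3F));
        return i + 2;
    }
}
```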
Motivation:
PriorityStreamByteDistributor uses a homegrown algorithm which distributes bytes to nodes in the priority tree. PriorityStreamByteDistributor has no concept of goodput, which may result in poor utilization of network resources. PriorityStreamByteDistributor also has performance issues related to the tree traversal approach and the number of nodes that must be visited. There also exist more proven algorithms from the resource scheduling domain which PriorityStreamByteDistributor does not employ.
Modifications:
- Introduce a new ByteDistributor which uses elements from weighted fair queue schedulers
Result:
StreamByteDistributor which is sensitive to priority and uses a more familiar distribution concept.
Fixes https://github.com/netty/netty/issues/4462
Motivation:
According to jetty docs the alpn-api should use the provided scope.
Modifications:
- change scope to provided for alpn-api
- update for new jdk
Result:
Users of Netty don't run into alpn version conflicts.
Fixes https://github.com/netty/netty/issues/4480
Motivation:
6.7 is the latest stable release in RHEL/CentOS 6 line. Given that most
RHEL/CentOS users have upgraded to 6.7 via yum upgrade, we should bump
our requirement.
Modification:
s/6.6/6.7/g
Result:
'mvn release:*' must be run on RHEL/CentOS 6.7 instead of 6.6.
Motivation:
The twitter hpack project does not have the support that it used to have. See discussion here: https://github.com/netty/netty/issues/4403.
Modifications:
Created a new module in Netty and copied the latest from twitter hpack master.
Result:
Netty no longer depends on twitter hpack.
Motivation:
A new version of ALPN boot has been released.
Modifications:
- Update the pom to pull in this new version
Result:
New JDKs get the new ALPN boot.
Motivation:
SSLSession allows invalidating a session and so disallowing its resumption. We should support this for OpenSslEngine as well.
Modifications:
- Correctly implement SSLSession.isValid() and invalidate() in OpenSSLEngine
- Add unit test.
Result:
Invalidation of SSL sessions is now supported when using OpenSSL.
Motivation:
As relying on an external DNS server can result in test failures, we should rather use a mock DNS server for the DNS tests.
Modifications:
- Refactor the DnsNameResolverTest to use a mock DNS Server which is using apacheds.
- Allow disabling the addition of an OPT resource record, as some servers do not support it.
Result:
More stable testsuite.
Motivation:
The JDK SSLEngine supports renegotiation, so we should at least support it server-side with OpenSslEngine as well.
That said, OpenSSL does not support sending messages asynchronously while the renegotiation is still in progress, so the application needs to ensure there are no writes going on while the renegotiation takes place. See also https://rt.openssl.org/Ticket/Display.html?id=1019 .
Modifications:
- Add support for renegotiation when OpenSslEngine is used in server mode
- Add unit tests.
- Upgrade to netty-tcnative 1.1.33.Fork9
Result:
Better compatibility with the JDK SSLEngine implementation.
Motivation:
A new version of netty-tcnative was released with some important bug-fixes.
Modifications:
Bump up version.
Result:
Using latest netty-tcnative version
Motivation:
The last os-maven-plugin had a bug that sometimes failed to correctly detect Fedora-based Linux.
Modifications:
Upgrade to 1.4.1
Result:
Correct detection on all Fedora-based Linux distributions.
Motivation:
The latest netty-tcnative fixes a bug in determining the version of the runtime openssl lib. It also publishes an artifact with the classifier linux-<arch>-fedora for fedora-based systems.
Modifications:
Modified the build files to use the "-fedora" classifier when appropriate for tcnative. Care is taken, however, to not change the classifier for the native epoll transport.
Result:
Netty is updated to the new shiny netty-tcnative.
Motivation:
https://github.com/twitter/hpack released version v1.0.1.
Modifications:
- Update pom files to pull in new version
Results:
Depend on the most recent hpack library.
Motivation:
The alpn / npn dependency versions depend on the java version. If a java version 1.8+ is used that is not explicitly listed in the pom file, then ALPN tests will fail because the java 1.7 version of alpn will be loaded by our pom file.
Modifications:
- Ensure there is a latest version to fall back upon for npn on Java 1.7+
- Ensure there is a latest version to fall back upon for alpn on Java 1.8+
Result:
Build can complete despite having a newer jdk which is not listed in our pom file.
Related: #3886
Motivation:
We were including OSGi manifests in sources/javadoc JARs, and the OSGi
container treats them as correct dependencies when resolving from an OBR
repository, which is incorrect. The runtime fails with a non-descriptive
ClassNotFoundException as a result.
Modifications:
- Do not include the OSGi manifests in sources/javadoc JARs
- Include Eclipse-related manifest entries in sources/javadoc JARs
Result:
Better OSGi compatibility
Motivation:
New versions of alpn-boot and npn-boot have been released.
Modifications:
- Update pom to pull in new versions.
Result:
Dependencies more up to date.
Motivation:
Sometimes the user already has a PrivateKey / X509Certificate which should be used to create a new SslContext. At the moment we only allow constructing it from Files.
Modifications:
- Add new methods to the SslContextBuilder to allow creating a SslContext from PrivateKey / X509Certificate
- Mark all public constructors of *SslContext as @Deprecated, the user should use SslContextBuilder
- Update tests to use SslContextBuilder.
Result:
Creating an SslContext is possible with a PrivateKey / X509Certificate.
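A minimal usage sketch of the new builder overloads, assuming the forServer(PrivateKey, X509Certificate...) variant added here:
```java
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

import java.security.PrivateKey;
import java.security.cert.X509Certificate;

final class InMemoryKeyMaterialSketch {
    // Builds a server context from key material the application already holds
    // (e.g. loaded from a KeyStore) without writing it to temporary files first.
    static SslContext newServerContext(PrivateKey key, X509Certificate... chain) throws Exception {
        return SslContextBuilder.forServer(key, chain).build();
    }
}
```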
Motivation:
We used ERR_get_error() to detect errors but failed to handle the different error types. We also failed to clear the thread's error queue before invoking SSL operations;
this could lead to detecting errors on a different OpenSslEngine than the one in which the error actually happened.
Modifications:
Explicitly handle errors via SSL.get_error and clear the error queue before SSL operations.
Result:
Correctly handle errors, with no false positives on OpenSslEngines other than the one in which the error occurred.
Motivation:
When a faulty never-ending test keeps producing a lot of garbage doing
nothing but generating CPU load, our CI fails to detect the stalled
build, because it determines the 'inactivity time' from console
activity and GC keeps producing console output.
Modifications:
Remove the -verbose:gc flag from pom.xml
Result:
Stalled builds are terminated by our CI server.
Motivation:
Discussion is in https://github.com/jetty-project/jetty-alpn/issues/8. The new API allows protocol negotiation to properly throw SSLHandshakeException.
Modifications:
Updated the parent pom.xml with the new version.
Result:
Upgraded alpn-api now allows throwing SSLHandshakeException.
Motivation:
At the moment hostname verification is not supported with OpenSSLEngine.
Modifications:
- Allow creating OpenSslEngine with peerHost and peerPort information.
- Respect endpointIdentificationAlgorithm and algorithmConstraints when setting and getting SSLParameters.
Result:
Hostname verification is now supported.
Motivation:
New versions of compression libraries, which improve their performance and fix some bugs.
Modifications:
Updated versions of the jzlib, compress-lzf, lz4 and commons-compress libraries.
Result:
Better stability and performance of compression codecs.
Motivation:
To protect against DoS attacks it can be useful to disable remotely initiated renegotiation.
Modifications:
Add a new flag to OpenSslContext that can be used to disable it
Add a testcase
Result:
Remotely initiated renegotiation requests can now be disabled.
Modifications:
- Add jetty.npn.version.latest and jetty.alpn.version.latest7/8
- Add npn-alpn-7 profile
- Use the *.latest7/8 version properties in alpn-8 and npn-alpn-7
- Add more profiles for newer JDK versions
- Reorder profiles
Motivation:
Right now the used hpack dependency does not contain a valid osgi manifest.
Modifications:
Upgrade hpack from 0.10.1 to 0.11.0.
Result:
hpack dependency works in osgi containers without wrapping.
Motivation:
Many projects need some kind of Channel/Connection pool implementation. While the protocols differ, many things can be shared, so we should provide a generic API and implementation.
Modifications:
Add ChannelPool / ChannelPoolMap API and implementations.
Result:
Reusable / Generic pool implementation that users can use.
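A hedged usage sketch of the new API, assuming the SimpleChannelPool / AbstractChannelPoolHandler names from this change:
```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.ChannelPool;
import io.netty.channel.pool.SimpleChannelPool;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.FutureListener;

final class ChannelPoolUsageSketch {
    static void acquireAndRelease(Bootstrap bootstrap) {
        final ChannelPool pool = new SimpleChannelPool(bootstrap, new AbstractChannelPoolHandler() {
            @Override
            public void channelCreated(Channel ch) {
                // Initialise the pipeline of a freshly created pooled channel here.
            }
        });

        pool.acquire().addListener(new FutureListener<Channel>() {
            @Override
            public void operationComplete(Future<Channel> future) {
                if (future.isSuccess()) {
                    Channel ch = future.getNow();
                    // ... use the channel, then hand it back to the pool ...
                    pool.release(ch);
                }
            }
        });
    }
}
```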
Motivation:
To support HTTP/2 we need ALPN support. This was not provided before when using OpenSslEngine, so the JDK SSLEngine was the only option.
Besides this, CipherSuiteFilter was not supported.
Modifications:
- Upgrade netty-tcnative and make use of new features to support ALPN and NPN in server and client mode.
- Guard against segfaults after the ssl pointer is freed
- correctly support the different failure behaviours
- add support for CipherSuiteFilter
Result:
Be able to use OpenSslEngine for ALPN / NPN for server and client.
Motivation:
Too much duplicated test code across the different compression codecs.
Modifications:
- Added abstract classes AbstractCompressionTest, AbstractDecoderTest and AbstractEncoderTest which contains common variables and tests for any compression codec.
- Removed common tests which are implemented in AbstractDecoderTest and AbstractEncoderTest from current tests for compression codecs.
- Implemented abstract methods of AbstractDecoderTest and AbstractEncoderTest in current tests for compression codecs.
- Added additional checks for current tests.
- Renamed abstract class IntegrationTest to AbstractIntegrationTest.
- Used Theories to run tests with heap and direct buffers.
- Removed code duplicates.
Result:
Removed duplicated test code for compression codecs and simplified adding tests for new compression codecs.
Motivation:
For some use cases X509ExtendedTrustManager is needed, as it also allows accessing the SSLEngine during validation.
Modifications:
Add support for X509ExtendedTrustManager on java >= 7
Result:
It's now possible to use X509ExtendedTrustManager with OpenSslEngine
Motivation:
JDK 1.8 adds default methods to collections classes that reference
classes that don't exist in JDK 7. That's binary compatible,
but not source compatible.
Modifications:
Enforce JDK version to be 1.7.* when releasing
Result:
Fixes #3548
Motivation:
There are new versions of the ALPN and NPN dependencies. There were also some backport misses in the pom file related to ALPN/NPN.
Modifications:
- Add new versions for ALPN/NPN dependencies.
- Backport missed pieces from pom.xml.
Result:
Updated version of ALPN/NPN versions.
Motivation:
Twitter hpack has upgraded to 0.10.1 to fix a parsing bug.
Modifications:
Updated the parent pom to specify the dependency version.
Result:
HTTP/2 updated to the latest hpack release.
Motivation:
Provide non-blocking XML parser as Netty codec.
Modifications:
New codec implemented/extracted.
io.netty.handler.codec.xml.XmlDecoder decodes XML fed by Netty without blocking.
Result:
Non-blocking XML stream parsing.
Motivation:
Release 4.0.25 was not usable in OSGi environments due to a simple typo.
An automated test could have caught the problem even before it was
committed.
Modifications:
This patch introduces a new artifact, osgitests, which pulls in all
production artifacts (which we want to be checked for OSGi compliance).
It contains only a single unit test, which runs a pax-exam container
with felix OSGi.
At initialization time, it scans all the artifact's dependencies,
looking for things belonging to io.netty group. The container is
configured to deploy those artifacts as bundles and fail if any bundle
is found to be unresolved. It performs a final check to see if any
bundles were tested this way, to make sure the mechanism is not
completely broken.
We are using wrappedBundle(), as two of our third-party dependencies do
not export packages correctly -- this masks the problem, assuming that
whoever deploys our artifacts depending on them will figure out how to
OSGify them.
Result:
Simple typos and other bundle manifest errors should be caught during
test phase of every build.
Motivation:
HTTP/2 codec was implemented in master branch.
Since master is not yet stable and it will be some time before it gets released, backporting it to 4.1 enables people to use the codec with a stable netty version.
Modification:
The code has been copied from master branch as is, with minor modifications to suit the `ChannelHandler` API in 4.x.
Apart from that change, there are two backward incompatible API changes included, namely,
- Added an abstract method:
`public abstract Map.Entry<CharSequence, CharSequence> forEachEntry(EntryVisitor<CharSequence> visitor)
throws Exception;`
to `HttpHeaders` and implemented the same in `DefaultHttpHeaders` as a delegate to the internal `TextHeader` instance.
- Added a method:
`FullHttpMessage copy(ByteBuf newContent);`
in `FullHttpMessage` with the implementations copied from relevant places in the master branch.
- Added missing abstract method related to setting/adding short values to `HttpHeaders`
Result:
HTTP/2 codec can be used with netty 4.1
Motivation:
The latest stable RHEL version of 6.x is now 6.6.
Modification:
Update pom.xml's validation configuration
Result:
Can release on the latest stable RHEL version in 6.x
Motivation:
We only support OpenSSL on the server side at the moment, but it would also be useful on the client side.
Modification:
* Upgrade to a new netty-tcnative snapshot to add client-side openssl support
* Add OpenSslClientContext which can be used to create SslEngine for client side usage
* Factor out common logic between OpenSslClientContext and OpenSslServerContext into a new abstract base class called OpenSslContext
* Correctly detect handshake failures as soon as possible
* Guard against segfault caused by multiple calls to destroyPools(). This can happen if OpenSslContext throws an exception in the constructor and the finalize() method is called later during GC
Result:
OpenSSL can now be used for both clients and servers.
Motivation:
ALPN version updates revealed an inconsistency visible by defaulting to npn when alpn was expected.
Modifications:
Default to ALPN.
Result:
Build and unit tests should pass.
Motivation:
There was a bug in the Java ALPN library we are using. A new version was released to fix this bug and we should update our pom.xml to use the new version.
Modifications:
Update pom.xml to use new ALPN library.
Result:
Newer versions of JDK (1.7_u71, 1.7_u72, 1.8_u25) have the bug fixed.
Motivation:
It takes too long to download the heap dump from the CI server.
Modifications:
Compress the heap dump as much as possible.
Result:
When a heap dump is generated due to a test failure, the generated heap
dump file is about 3 times smaller than before, although the compression
will increase the build time when a test fails.
Motivation:
We use 3 (!) libraries to build mock objects - easymock, mockito, jmock.
Mockito and jMock pull in different versions of Hamcrest, which
conflict with the version pulled in by jUnit.
Modifications:
- Replace mockito-all with mockito-core to avoid pulling in outdated
jUnit and Hamcrest
- Exclude junit-dep when pulling in jmock-junit4, because it pulls an
outdated Hamcrest version
- Pull in the hamcrest-library version used by jUnit explicitly
Result:
No more dependency hell that results in NoSuchMethodError during the
tests
Motivation:
Improvements were made on the main line to support ALPN and mutual
authentication for TLS. These should be backported.
Modifications:
- Backport commits from the master branch
- f8af84d599
- e74c8edba3
Result:
Support for ALPN and mutual authentication.
Motivation:
So far, we relied on the domain name resolution mechanism provided by
JDK. It served its purpose very well, but had the following
shortcomings:
- Domain name resolution is performed in a blocking manner.
This becomes a problem when a user has to connect to thousands of
different hosts. e.g. web crawlers
- It is impossible to employ an alternative cache/retry policy.
e.g. lower/upper bound in TTL, round-robin
- It is impossible to employ an alternative name resolution mechanism.
e.g. Zookeeper-based name resolver
Modification:
- Add the resolver API in the new module: netty-resolver
- Implement the DNS-based resolver: netty-resolver-dns
.. which uses netty-codec-dns
- Make ChannelFactory reusable because it's now used by
io.netty.bootstrap, io.netty.resolver.dns, and potentially by other
modules in the future
- Move ChannelFactory from io.netty.bootstrap to io.netty.channel
- Deprecate the old ChannelFactory
- Add ReflectiveChannelFactory
Result:
It is trivial to resolve a large number of domain names asynchronously.
Related issue: #1133
Motivation:
There is no support for client socket connections via a proxy server in
Netty.
Modifications:
- Add a new module 'handler-proxy'
- Add ProxyHandler and its subclasses to support SOCKS 4a/5 and HTTP(S)
proxy connections
- Add a full parameterized test for most scenarios
- Clean up pom.xml
Result:
A user can make an outgoing connection via proxy servers with only
trivial effort.
Motivation:
LZMA compression algorithm has a very good compression ratio.
Modifications:
- Added `lzma-java` library which implements LZMA algorithm.
- Implemented LzmaFrameEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the LzmaFrameEncoder and how it can compress data for the next uncompression using the original library.
Result:
LZMA encoder which can compress data using LZMA algorithm.
Motivation:
LZ4 compression codec provides sending and receiving data encoded by very fast LZ4 algorithm.
Modifications:
- Added `lz4` library which implements LZ4 algorithm.
- Implemented Lz4FramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the Lz4FramedEncoder and how it can compress data for the next uncompression using the original library.
- Implemented Lz4FramedDecoder which extends ByteToMessageDecoder and provides uncompression of incoming messages.
- Added tests to verify the Lz4FramedDecoder and how it can uncompress data after compression using the original library.
- Added integration tests for Lz4FramedEncoder/Decoder.
Result:
Full LZ4 compression codec which can compress/uncompress data using LZ4 algorithm.
Related issue: #2508
Motivation:
The '<exec/>' task takes an unnecessarily long time due to a known issue:
- https://issues.apache.org/bugzilla/show_bug.cgi?id=54128
Modifications:
- Reduce the number of '<exec/>' tasks for faster build
- Use '<propertyregex/>' to extract the output
Result:
Slightly faster build
Motivation:
LZF compression codec provides sending and receiving data encoded by very fast LZF algorithm.
Modifications:
- Added Compress-LZF library which implements LZF algorithm
- Implemented LzfEncoder which extends MessageToByteEncoder and provides compression of outgoing messages
- Added tests to verify the LzfEncoder and how it can compress data for the next uncompression using the original library
- Implemented LzfDecoder which extends ByteToMessageDecoder and provides uncompression of incoming messages
- Added tests to verify the LzfDecoder and how it can uncompress data after compression using the original library
- Added integration tests for LzfEncoder/Decoder
Result:
Full LZF compression codec which can compress/uncompress data using LZF algorithm.
MQTT is an open source protocol on top of TCP which is widely used in
mobile communication and also for IoT (Internet of Things) today. This
will add an open source implementation of MQTT so that it becomes easier
for Netty users to implement an MQTT application.
For more information about the MQTT protocol, read this:
http://public.dhe.ibm.com/software/dw/webservices/ws-mqtt/mqtt-v3r1.html
Motivation:
As part of GSOC 2013 we had @mbakkar working on a DNS codec, but we did not integrate it yet as it needed some cleanup. This commit is based on @mbakkar's work and provides the codec for DNS.
Modifications:
Add DNS codec
Result:
Reusable DNS codec will be included in netty.
This PR also includes an AsynchronousDnsResolver which allows resolving DNS entries in a non-blocking way by making use
of the DNS codec and the netty transport itself.
Motivation:
maven-antrun-plugin does not redirect stdin, and thus it's impossible to
run interactive examples such as securechat-client and telnet-client.
org.codehaus.mojo:exec-maven-plugin redirects stdin, but it buffers
stdout and stderr, and thus the application's output is not flushed in a timely manner.
Modifications:
Deploy a forked version of exec-maven-plugin which flushes output
buffers in a timely manner.
Result:
Interactive examples work. Launches faster than maven-antrun-plugin.
Motivation:
The examples have not been updated for a long time and show various
issues, which are fixed in this commit.
Modifications:
- Overall simplification to reduce LoC
- Use system properties to get options instead of parsing args.
- Minimize option validation
- Just use System.out/err instead of Logger
- Do not pass config as parameters - just access it directly
- Move the main logic to main(String[]) instead of creating a new
instance meaninglessly
- Update netty-build-21 to make checkstyle not complain
- Remove 'throws Exception' clause if possible
- Line wrap at 120 (previously at 80)
- Add an option to enable SSL for most examples
- Use ChannelFuture.sync() instead of await()
- Use System.out for the actual result. Use System.err otherwise.
- Delete examples that are not very useful:
- applet
- websocket/html5
- websocketx/sslserver
- localecho/multithreaded
- Add run-example.sh which simplifies launching an example from command
line
- Rewrite FileServer example
Result:
Shorter and simpler examples. A user can focus more on what it actually
does than miscellaneous stuff. A user can launch an example very
easily.
Motivation:
exec-maven-plugin does not flush stdout and stderr, making the console
output from the examples invisible to users
Modification:
Use maven-antrun-plugin instead
Result:
A user sees the output from the examples immediately.
Motivation:
- OpenSslEngine and JDK SSLEngine (+ Jetty NPN) have different APIs to
support NextProtoNego extension.
- It is impossible to configure NPN with SslContext when the provider
type is JDK.
Modification:
- Implement NextProtoNego extension by overriding the behavior of
SSLSession.getProtocol() for both OpenSSLEngine and JDK SSLEngine.
- SSLEngine.getProtocol() returns a string delimited by a colon (':')
where the first component is the transport protocol (e.g. TLSv1.2)
and the second component is the name of the application protocol
- Remove the direct reference of Jetty NPN classes from the examples
- Add SslContext.newApplicationProtocolSelector
Result:
- A user can now use both JDK SSLEngine and OpenSslEngine for NPN-based
protocols such as HTTP2 and SPDY
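Based on the colon-delimited scheme described above, an application would read the negotiated application protocol roughly like this (a sketch; the concrete protocol names in the comment are illustrative):
```java
import javax.net.ssl.SSLEngine;

final class NegotiatedProtocolSketch {
    // getProtocol() now returns e.g. "TLSv1.2:spdy/3.1", i.e.
    // "<transport protocol>:<application protocol>".
    static String applicationProtocol(SSLEngine engine) {
        String protocol = engine.getSession().getProtocol();
        int colon = protocol.indexOf(':');
        return colon < 0 ? null : protocol.substring(colon + 1);
    }
}
```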
Motivation:
Build fails with JDK 8 because npn-boot does not work with JDK 8
Modifications:
Do not specify bootclasspath when on JDK 8
Result:
Build is green again.
Motivation:
Due to a known problem[1] of maven-compiler-plugin, our build always
compiles everything from scratch, which is a waste of time.
Modifications:
Exclude package-info.java from the source list.
Result:
Much shorter build time.
[1]: https://jira.codehaus.org/browse/MCOMPILER-205
Motivation:
- example/pom.xml has quite a bit of duplication.
- We expect that we depend on npn-boot in more than one module in the
near future. (e.g. handler, codec-http, and codec-http2)
Modification:
- Deduplicate the profiles in example/pom.xml
- Move the build configuration related with npn-boot to the parent pom.
- Add run-example.sh that helps a user launch an example easily
Result:
- Cleaner build files
- Easier to add a new example
- Easier to launch an example
- Easier to run the tests that relies on npn-boot in the future
Motivation:
Some users already use an SSLEngine implementation in finagle-native. It
wraps OpenSSL to get higher SSL performance. However, to take advantage
of it, finagle-native must be compiled manually, and it means we cannot
pull it in as a dependency and thus we cannot test our SslHandler
against the OpenSSL-based SSLEngine. For instance, we had #2216.
Because the construction procedures of JDK SSLEngine and OpenSslEngine
are very different from each other, we also need to provide a universal
way to enable SSL in a Netty application.
Modifications:
- Pull netty-tcnative in as an optional dependency.
http://netty.io/wiki/forked-tomcat-native.html
- Backport NativeLibraryLoader from 4.0
- Move OpenSSL-based SSLEngine implementation into our code base.
- Copied from finagle-native; originally written by @jpinner et al.
- Overall cleanup by @trustin.
- Run all SslHandler tests with both default SSLEngine and OpenSslEngine
- Add a unified API for creating an SSL context
- SslContext allows you to create a new SSLEngine or a new SslHandler
with your PKCS#8 key and X.509 certificate chain.
- Add JdkSslContext and its subclasses
- Add OpenSslServerContext
- Add ApplicationProtocolSelector to ensure the future support for NPN
(NextProtoNego) and ALPN (Application Layer Protocol Negotiation) on
the client-side.
- Add SimpleTrustManagerFactory to help a user write a
TrustManagerFactory easily, which should be useful for those who need
to write an alternative verification mechanism. For example, we can
use it to implement an unsafe TrustManagerFactory that accepts
self-signed certificates for testing purposes.
- Add InsecureTrustManagerFactory and FingerprintTrustManager for quick
and dirty testing
- Add SelfSignedCertificate class which generates a self-signed X.509
certificate very easily.
- Update all our examples to use SslContext.newClient/ServerContext()
- SslHandler now logs the chosen cipher suite when handshake is
finished.
Result:
- Cleaner unified API for configuring an SSL client and an SSL server
regardless of its internal implementation.
- When native libraries are available, OpenSSL-based SSLEngine
implementation is selected automatically to take advantage of its
performance benefit.
- Examples take advantage of this modification and thus are cleaner.
Motivation:
It should be frictionless to import our project into Eclipse
Modifications:
Exclude the plugins with missing life cycle mapping. They are not useful
for use with IDE anyway.
Result:
Fixes #2488
Netty is imported into Eclipse without a problem.
Motivation:
oss.sonatype.org refuses to promote an artifact if it doesn't have the
default JAR (the JAR without classifier.)
Modifications:
- Generate both the default JAR and the native JAR to make
oss.sonatype.org happy
- Rename the profile 'release' to 'restricted-release' which reflects
what it really does better
- Remove the redundant <quickbuild>true</quickbuild> in all/pom.xml
We specify the profile 'full' that triggers that property already
in maven-release-plugin configuration.
Result:
oss.sonatype.org is happy. Simpler pom.xml
Motivation:
Netty must be released from RHEL 6.5 x86_64 or compatible so that:
1) we ship x86_64 version of epoll transport officially, and
2) we ensure the ABI compatibility with older GLIBC versions.
The shared library built on a distribution with newer GLIBC will not
run on older distributions.
Modifications:
- When 'release' profile is active, perform an additional check using
maven-enforcer-plugin so that 'mvn release:*' fails when running on
non-RHEL6.5. This rule is active only when releasing, so a user
should not be affected.
- Simplify maven-release-plugin configuration by removing redundant
profiles such as 'linux'. 'linux' is automatically activated when
releasing because we now enforce the release occurs on linux-x86_64.
- Remove the no-osgi profile, which is unused
- Remove the reference to 'sonatype-oss-release' profile in all/pom.xml,
because we always specify 'release' profile when releasing
- Rename the profile 'linux-native' to 'linux' for brevity
- Upgrade oss-parent and maven-enforcer-plugin
Result:
No one can make a mistake to release Netty on an environment that can
produce incompatible or missing native library.
Motivation:
So far, we used a very simple platform string such as linux64 and
linux32. However, this is far from perfection because it does not
include anything about the CPU architecture.
Also, the current build tries to put multiple versions of .so files into
a single JAR. This doesn't work very well when we have to ship for many
different platforms. Think about shipping .so/.dynlib files for both
Linux and Mac OS X.
Modification:
- Use os-maven-plugin as an extension to determine the current OS and
CPU architecture reliably at build time
- Use Maven classifier instead of trying to put all shared libraries
into a single JAR
- NativeLibraryLoader does not guess the OS and bit mode anymore and it
always looks for the same location regardless of platform, because the
Maven classifier does the job instead.
Result:
Better scalable native library deployment and retrieval
Motivation:
If sun.nio.ch is not optional this will cause troubles in the
OSGi world. The package is not exposed by default in OSGi, so
actually the whole netty framework cannot be used directly.
There are workarounds, but workarounds are ugly. Especially since
the use of sun.nio.ch is optional. So the requirement on the
package should be optional as well.
Modifications:
Make the import of sun.nio.ch optional.
Result:
If the package cannot be imported it will behave as if the package
sun.nio.ch is not present (like with other JVMs). If the package is
exposed in OSGi (e.g. bootclassloader delegation, extension fragment)
it will be used.
Motivation:
While investigating the recent CI machine crashes, I observed that the
JVM processes spawned by surefire sometimes take up to 1 GiB RAM.
Consuming large amount of memory isn't really a problem, but we need to
make sure no GC thrashing is occurring during the tests.
Modifications:
Add -verbose:gc option to the test JVM arguments
Result:
We can determine if there is any GC anomalies going on in our CI
machine.
Motivation:
Cleanup pom.xml file.
Modifications:
Remove sniffer whitelist entries for NIO.2 as we no longer include a NIO.2-based transport.
Result:
Less entries in pom.xml
Motivation:
At the moment we use SocketChannel.open(), ServerSocketChannel.open() and DatagramChannel.open(...) within the constructors of our
NIO channels. This introduces a bottleneck if you create a lot of connections, as these calls delegate to SelectorProvider.provider(), which
synchronizes internally. This change removes the bottleneck.
Modifications:
Obtain a static instance of the SelectorProvider and use SelectorProvider.openSocketChannel(), SelectorProvider.openServerSocketChannel() and
SelectorProvider.openDatagramChannel(). This eliminates the bottleneck as SelectorProvider.provider() is not called on every channel creation.
Result:
Less contention when creating new channels.
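The underlying idea, as a minimal JDK-level sketch (the actual channel classes are more involved):
```java
import java.io.IOException;
import java.nio.channels.SocketChannel;
import java.nio.channels.spi.SelectorProvider;

final class SelectorProviderSketch {
    // Resolve the provider once: SelectorProvider.provider() synchronizes internally,
    // so looking it up for every new channel serialises channel creation.
    private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();

    static SocketChannel newSocketChannel() throws IOException {
        // Equivalent to SocketChannel.open() but without the per-call provider lookup.
        return DEFAULT_SELECTOR_PROVIDER.openSocketChannel();
    }
}
```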
This transport uses JNI (C) to directly make use of epoll in edge-triggered mode for maximal performance on Linux. Besides this, it also supports TCP_CORK and produces less GC than the NIO transport using JDK NIO.
It only builds on Linux and the build is skipped if Linux is not used. The transport produces a jar which contains all needed .so files for 32-bit and 64-bit. The user only needs to include the jar as a dependency as usual
to make use of it and use the correct classes.
This includes also some cleanup of @trustin
This changeset implements the full memcache binary protocol spec, including
a first batch of tests. Ascii protocol and more coverage and helper classes
will follow.
- Move the version number to the parent pom's pluginManagement section
- Remove unnecessary system properties
- Increase the scope of execution from compile to runtime