Motivation:
Some users already use an SSLEngine implementation in finagle-native. It
wraps OpenSSL to achieve higher SSL performance. However, to take
advantage of it, finagle-native must be compiled manually, which means
we cannot pull it in as a dependency and thus cannot test our SslHandler
against the OpenSSL-based SSLEngine. For instance, see #2216.
Because the construction procedures of the JDK SSLEngine and
OpenSslEngine are very different from each other, we also need to
provide a unified way to enable SSL in a Netty application.
Modifications:
- Pull netty-tcnative in as an optional dependency.
http://netty.io/wiki/forked-tomcat-native.html
- Backport NativeLibraryLoader from 4.0
- Move OpenSSL-based SSLEngine implementation into our code base.
- Copied from finagle-native; originally written by @jpinner et al.
- Overall cleanup by @trustin.
- Run all SslHandler tests with both the default SSLEngine and OpenSslEngine
- Add a unified API for creating an SSL context
- SslContext allows you to create a new SSLEngine or a new SslHandler
with your PKCS#8 key and X.509 certificate chain (see the sketch below).
- Add JdkSslContext and its subclasses
- Add OpenSslServerContext
- Add ApplicationProtocolSelector to ensure future support for NPN
(NextProtoNego) and ALPN (Application Layer Protocol Negotiation) on
the client side.
- Add SimpleTrustManagerFactory to help users write a
TrustManagerFactory easily, which should be useful for those who need
to write an alternative certificate verification mechanism. For
example, we can use it to implement an unsafe TrustManagerFactory that
accepts self-signed certificates for testing purposes.
- Add InsecureTrustManagerFactory and FingerprintTrustManagerFactory
for quick and dirty testing
- Add the SelfSignedCertificate class, which generates a self-signed
X.509 certificate with minimal effort.
- Update all our examples to use SslContext.newClient/ServerContext()
- SslHandler now logs the chosen cipher suite when the handshake is
finished.
Result:
- Cleaner, unified API for configuring an SSL client and an SSL server
regardless of the internal implementation.
- When native libraries are available, the OpenSSL-based SSLEngine
implementation is selected automatically to take advantage of its
performance benefits.
- The examples take advantage of these modifications and are thus
cleaner.
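Taken together, the new API can be used roughly as follows. This is a
minimal sketch assuming the classes named above (SelfSignedCertificate,
InsecureTrustManagerFactory, SslContext); exact signatures may differ
between versions:

    import io.netty.buffer.ByteBufAllocator;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslHandler;
    import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
    import io.netty.handler.ssl.util.SelfSignedCertificate;

    public final class SslContextSketch {
        public static void main(String[] args) throws Exception {
            // Server side: a throwaway self-signed certificate for testing.
            SelfSignedCertificate ssc = new SelfSignedCertificate();
            SslContext server =
                    SslContext.newServerContext(ssc.certificate(), ssc.privateKey());

            // Client side: trust all certificates - for testing only.
            SslContext client =
                    SslContext.newClientContext(InsecureTrustManagerFactory.INSTANCE);

            // Either context yields an SslHandler backed by the JDK SSLEngine
            // or, when the native library is available, by OpenSslEngine.
            // In a real initializer: ch.pipeline().addLast(server.newHandler(ch.alloc()));
            SslHandler handler = server.newHandler(ByteBufAllocator.DEFAULT);
        }
    }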
Motivation:
It should be frictionless to import our project into Eclipse.
Modifications:
Exclude the plugins with missing life cycle mappings. They are not
useful when working in an IDE anyway.
Result:
Fixes #2488
Netty is imported into Eclipse without a problem.
Motivation:
oss.sonatype.org refuses to promote an artifact if it doesn't have the
default JAR (the JAR without a classifier).
Modifications:
- Generate both the default JAR and the native JAR to make
oss.sonatype.org happy
- Rename the profile 'release' to 'restricted-release', which better
reflects what it actually does
- Remove the redundant <quickbuild>true</quickbuild> in all/pom.xml;
the 'full' profile, which triggers that property, is already
specified in the maven-release-plugin configuration.
Result:
oss.sonatype.org is happy, and pom.xml is simpler.
Motivation:
Netty must be released from RHEL 6.5 x86_64 or compatible so that:
1) we ship x86_64 version of epoll transport officially, and
2) we ensure the ABI compatibility with older GLIBC versions.
The shared library built on a distribution with newer GLIBC will not
run on older distributions.
Modifications:
- When the 'release' profile is active, perform an additional check
using maven-enforcer-plugin so that 'mvn release:*' fails when running
on anything other than RHEL 6.5. This rule is active only when
releasing, so regular users are not affected (see the sketch below).
- Simplify maven-release-plugin configuration by removing redundant
profiles such as 'linux'. 'linux' is automatically activated when
releasing because we now enforce the release occurs on linux-x86_64.
- Remove the no-osgi profile, which is unused
- Remove the reference to 'sonatype-oss-release' profile in all/pom.xml,
because we always specify 'release' profile when releasing
- Rename the profile 'linux-native' to 'linux' for brevity
- Upgrade oss-parent and maven-enforcer-plugin
Result:
No one can mistakenly release Netty from an environment that would
produce an incompatible or missing native library.
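The enforcer rule mentioned above might look roughly like the sketch
below. The property checked here (os.detected.classifier, as set by
os-maven-plugin) and the regex are assumptions; the real rule may
identify RHEL 6.5 differently:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <executions>
        <execution>
          <id>enforce-release-environment</id>
          <goals>
            <goal>enforce</goal>
          </goals>
          <configuration>
            <rules>
              <!-- Illustrative only: fail unless building on linux-x86_64. -->
              <requireProperty>
                <property>os.detected.classifier</property>
                <regex>linux-x86_64</regex>
                <regexMessage>
                  Release Netty from RHEL 6.5 x86_64 or compatible only.
                </regexMessage>
              </requireProperty>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>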
Motivation:
So far, we used a very simple platform string such as linux64 and
linux32. However, this is far from perfect because it says nothing
about the CPU architecture.
Also, the current build tries to put multiple versions of .so files
into a single JAR. This doesn't work very well when we have to ship for
many different platforms. Think about shipping .so/.dylib files for
both Linux and Mac OS X.
Modifications:
- Use os-maven-plugin as an extension to determine the current OS and
CPU architecture reliably at build time
- Use the Maven classifier instead of trying to put all shared
libraries into a single JAR (see the sketch below)
- NativeLibraryLoader does not guess the OS and bit mode anymore; it
always looks in the same location regardless of the platform, because
the Maven classifier does that job instead.
Result:
More scalable native library deployment and retrieval
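For consumers, picking the right native JAR then becomes a matter of
specifying the classifier. A sketch - the artifact name is
illustrative, and os.detected.classifier is the property set by
os-maven-plugin:

    <!-- os-maven-plugin, registered as a build extension, sets
         os.detected.classifier to e.g. linux-x86_64 or osx-x86_64. -->
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-transport-native-epoll</artifactId>
      <version>${netty.version}</version>
      <classifier>${os.detected.classifier}</classifier>
    </dependency>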
Motivation:
If sun.nio.ch is not optional, it causes trouble in the OSGi world. The
package is not exposed by default in OSGi, so the whole Netty framework
cannot be used directly. There are workarounds, but workarounds are
ugly, especially since Netty's use of sun.nio.ch is optional. The
requirement on the package should therefore be optional as well.
Modifications:
Make the import of sun.nio.ch optional.
Result:
If the package cannot be imported, Netty behaves as if sun.nio.ch were
not present (as with other JVMs). If the package is exposed in OSGi
(e.g. via boot class loader delegation or an extension fragment), it
will be used.
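With maven-bundle-plugin, such an optional import can be declared
roughly as follows (a sketch; the actual build configuration may
differ):

    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <configuration>
        <instructions>
          <!-- Resolve sun.nio.ch when the framework exposes it;
               otherwise resolve the bundle without it. -->
          <Import-Package>sun.nio.ch;resolution:=optional,*</Import-Package>
        </instructions>
      </configuration>
    </plugin>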
Motivation:
While investigating the recent CI machine crashes, I observed that the
JVM processes spawned by surefire sometimes take up to 1 GiB of RAM.
Consuming a large amount of memory isn't really a problem, but we need
to make sure no GC thrashing is occurring during the tests.
Modifications:
Add the -verbose:gc option to the test JVM arguments (sketched below)
Result:
We can determine whether any GC anomalies are occurring on our CI
machine.
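A sketch of the corresponding surefire configuration (the actual
argLine may carry additional options):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- Print GC activity of the forked test JVMs to the build log. -->
        <argLine>-verbose:gc</argLine>
      </configuration>
    </plugin>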
Motivation:
Cleanup pom.xml file.
Modifications:
Remove the sniffer whitelist entries for NIO.2, as we no longer include a NIO.2-based transport.
Result:
Fewer entries in pom.xml
Motivation:
At the moment we use SocketChannel.open(), ServerSocketChannel.open() and DatagramChannel.open() within the constructors of our
NIO channels. This introduces a bottleneck when you create a lot of connections, as these calls delegate to SelectorProvider.provider(), which
synchronizes internally. This change removes the bottleneck.
Modifications:
Obtain a static instance of the SelectorProvider and use SelectorProvider.openSocketChannel(), SelectorProvider.openServerSocketChannel() and
SelectorProvider.openDatagramChannel(). This eliminates the bottleneck as SelectorProvider.provider() is not called on every channel creation.
Result:
Less contention when creating new channels.
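The idea, sketched outside Netty's actual classes (the holder class
here is illustrative):

    import java.io.IOException;
    import java.nio.channels.SocketChannel;
    import java.nio.channels.spi.SelectorProvider;

    final class NioChannelOpener {
        // SelectorProvider.provider() synchronizes internally, so look it
        // up once and reuse the instance for every channel.
        private static final SelectorProvider PROVIDER = SelectorProvider.provider();

        // Equivalent to SocketChannel.open(), minus the per-call
        // SelectorProvider.provider() lookup.
        static SocketChannel newSocketChannel() throws IOException {
            return PROVIDER.openSocketChannel();
        }

        private NioChannelOpener() { }
    }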
This transport uses JNI (C) to make direct use of epoll in
edge-triggered mode for maximal performance on Linux. Besides this, it
also supports TCP_CORK and produces less GC pressure than the NIO
transport built on JDK NIO. It builds only on Linux, and the build is
skipped on other platforms. The transport produces a JAR which contains
all needed .so files for 32-bit and 64-bit systems. The user only needs
to include the JAR as a dependency, as usual, and use the correct
classes to make use of it.
This also includes some cleanup by @trustin.
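Using it means swapping the NIO classes for their epoll counterparts.
A minimal sketch, assuming the class names added by this change
(EpollEventLoopGroup, EpollServerSocketChannel):

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.epoll.EpollEventLoopGroup;
    import io.netty.channel.epoll.EpollServerSocketChannel;

    public final class EpollBootstrapSketch {
        public static void main(String[] args) {
            // Use the epoll classes where NioEventLoopGroup and
            // NioServerSocketChannel would normally appear.
            EpollEventLoopGroup group = new EpollEventLoopGroup();
            ServerBootstrap b = new ServerBootstrap();
            b.group(group).channel(EpollServerSocketChannel.class);
            // ... add handlers and bind as usual ...
            group.shutdownGracefully();
        }
    }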
This changeset implements the full memcache binary protocol spec,
including a first batch of tests. The ASCII protocol, more coverage,
and helper classes will follow.
- Move the version number to the parent pom's pluginManagement section
- Remove unnecessary system properties
- Increase the scope of execution from compile to runtime
- Remove the reference to ResourceLeak from the buffer implementations
and use wrappers instead:
- SimpleLeakAwareByteBuf and AdvancedLeakAwareByteBuf
- It is now allocator's responsibility to create a leak-aware buffer.
- Added AbstractByteBufAllocator.toLeakAwareBuffer() for easier
implementation
- Add WrappedByteBuf to reduce duplication between *LeakAwareByteBuf and
UnreleasableByteBuf
- Raise the level of leak reports to ERROR, because a leak will break
the app eventually
- Replace enabled/disabled property with the leak detection level
- Only print stack trace when level is ADVANCED or above to avoid user
confusion
- Add the 'leak' build profile, which enables highly detailed leak
reporting during the build
- Remove ResourceLeakException, which is not used anymore
- Fixes #2003 properly
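The allocator-side decision can be sketched as follows. This is a
simplified sketch; the real method lives in AbstractByteBufAllocator
and relies on internal classes:

    // Wrap a freshly allocated buffer according to the configured
    // leak detection level.
    protected static ByteBuf toLeakAwareBuffer(ByteBuf buf) {
        ResourceLeak leak;
        switch (ResourceLeakDetector.getLevel()) {
        case SIMPLE:
            // Cheap wrapper: reports the leak without access records.
            leak = AbstractByteBuf.leakDetector.open(buf);
            if (leak != null) {
                buf = new SimpleLeakAwareByteBuf(buf, leak);
            }
            break;
        case ADVANCED:
        case PARANOID:
            // Expensive wrapper: records accesses for detailed reports.
            leak = AbstractByteBuf.leakDetector.open(buf);
            if (leak != null) {
                buf = new AdvancedLeakAwareByteBuf(buf, leak);
            }
            break;
        default:
            break;
        }
        return buf;
    }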
- Instead of using 'bundle' packaging, use 'jar' packaging. This is
more robust because some strict build tools fail to retrieve the
artifacts from a Maven repository unless their packaging is 'jar'.
- All artifacts now contain META-INF/io.netty.versions.properties,
which provides detailed information about the build and repository.
- Remove the OSGi testsuite temporarily because it gives false errors
during split-package examination.
- Add io.netty.util.Version for easy retrieval of version information
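Version information can then be retrieved roughly like this (a minimal
sketch):

    import io.netty.util.Version;
    import java.util.Map;

    public final class PrintNettyVersions {
        public static void main(String[] args) {
            // Reads the version properties of each Netty artifact on the
            // classpath and prints the build information.
            for (Map.Entry<String, Version> e : Version.identify().entrySet()) {
                System.out.println(e.getKey() + ": " + e.getValue());
            }
        }
    }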
The API changes made so far turned out to increase the memory footprint
and consumption, while our intention was actually to decrease them.
Memory consumption issue:
When there are many connections which do not exchange data frequently,
the old Netty 4 API used a lot more memory than Netty 3 because it
always allocated a per-handler buffer for each connection unless
otherwise explicitly stated by a user. In a usual real-world load, a
client doesn't always send requests without pausing, so the idea of
having a buffer whose life cycle is bound to the life cycle of a
connection didn't work as expected.
Memory footprint issue:
The old Netty 4 API decreased overall memory footprint by a great deal
in many cases. It was mainly because the old Netty 4 API did not
allocate a new buffer and event object for each read. Instead, it
created a new buffer for each handler in a pipeline. This works pretty
well as long as the number of handlers in a pipeline is only a few.
However, for a highly modular application with many handlers which
handle connections that last for a relatively short period, it
actually makes the memory footprint issue much worse.
Changes:
All in all, this is about retaining all the good changes we made in 4
so far, such as the better thread model, while going back to the way we
dealt with message events in 3.
To fix the memory consumption/footprint issue mentioned above, we made a
hard decision to break the backward compatibility again with the
following changes:
- Remove MessageBuf
- Merge Buf into ByteBuf
- Merge ChannelInboundByte/MessageHandler and ChannelStateHandler into ChannelInboundHandler
- Similar changes were made to the adapter classes
- Merge ChannelOutboundByte/MessageHandler and ChannelOperationHandler into ChannelOutboundHandler
- Similar changes were made to the adapter classes
- Introduce MessageList, which is similar to MessageEvent in Netty 3
- Replace inboundBufferUpdated(ctx) with messageReceived(ctx, MessageList)
- Replace flush(ctx, promise) with write(ctx, MessageList, promise)
- Remove ByteToByteEncoder/Decoder/Codec
- Replaced by MessageToByteEncoder<ByteBuf>, ByteToMessageDecoder<ByteBuf>, and ByteMessageCodec<ByteBuf>
- Merge EmbeddedByteChannel and EmbeddedMessageChannel into EmbeddedChannel
- Add SimpleChannelInboundHandler which is sometimes more useful than
ChannelInboundHandlerAdapter
- Bring back Channel.isWritable() from Netty 3
- Add the ChannelInboundHandler.channelWritabilityChanged() event
- Add RecvByteBufAllocator configuration property
- Similar to ReceiveBufferSizePredictor in Netty 3
- Some existing configuration properties, such as
DatagramChannelConfig.receivePacketSize, are gone now.
- Remove suspend/resumeIntermediaryDeallocation() in ByteBuf
This change would have been impossible without @normanmaurer's help. He
fixed, ported, and improved many parts of the changes.
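Under the revised API, a minimal echo-style inbound handler looks
roughly like this. A sketch based on the names above; exact signatures
may differ:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.MessageList;

    public class EchoHandler extends ChannelInboundHandlerAdapter {
        @Override
        public void messageReceived(ChannelHandlerContext ctx, MessageList<Object> msgs) {
            // MessageList plays the role of Netty 3's MessageEvent: the
            // whole batch is written back in one call.
            ctx.write(msgs);
        }
    }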