Motivation:
Currently our epoll native transport requires sun.misc.Unsafe and so we need to take this into account for Epoll.isAvailable().
Modifications:
Take into account whether sun.misc.Unsafe is present.
Result:
Only return true for Epoll.isAvailable() if sun.misc.Unsafe is present.
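A minimal sketch of the kind of guard described above, assuming the check is based on io.netty.util.internal.PlatformDependent.hasUnsafe() (the actual implementation may differ):

```
import io.netty.util.internal.PlatformDependent;

final class EpollAvailability {
    // Sketch only: report epoll as unavailable when Unsafe cannot be used,
    // since the native transport relies on it.
    static boolean isAvailable() {
        return PlatformDependent.hasUnsafe();
    }
}
```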
Motivation:
We had reports of failures when sun.misc.Unsafe was not present. We should also run our tests with it disabled to ensure everything works even if sun.misc.Unsafe is not present on the system.
Modifications:
Add a new profile which allows running the tests without Unsafe (using -PnoUnsafe)
Result:
Better testing of Netty on systems where sun.misc.Unsafe is not present.
Motivation:
We did not correctly add newlines if the src data needed to be padded. This regression was introduced by '63426fc3ed083513c07a58b45381f5c10dd47061'
Modifications:
- Correctly handle newlines
- Add unit test that proves the fix.
Result:
No more invalid base64 encoded data.
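A hedged unit-test sketch of the behaviour described above, assuming io.netty.handler.codec.base64.Base64.encode(ByteBuf, boolean) is the entry point under test:

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.handler.codec.base64.Base64;
import io.netty.util.CharsetUtil;

public class Base64PaddingSketch {
    public static void main(String[] args) {
        // 58 bytes is not a multiple of 3, so the encoded output needs '=' padding
        // and spans more than one 76-character line, forcing a line break.
        ByteBuf src = Unpooled.wrappedBuffer(new byte[58]);
        ByteBuf encoded = Base64.encode(src, true); // breakLines = true
        try {
            for (String line : encoded.toString(CharsetUtil.US_ASCII).split("\n")) {
                if (line.length() > 76) {
                    throw new AssertionError("missing newline in padded output: " + line);
                }
            }
        } finally {
            encoded.release();
        }
    }
}
```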
Motivation:
Builds fail with java 1.8.0_72 because jetty-alpn-boot has absorbed new code from openjdk and older versions are now incompatible.
Modifications:
- Updated jetty-alpn-agent version
Result:
We can now build/develop using java 1.8.0_72
Motivation:
Sometimes it's easier to get keys/certificates as `InputStream`s than it is to
get an actual `File`. This is especially true when operating in a container
environment and `getResourceAsStream` is the best way to load resources
packaged with an application.
Modifications:
- Add read-from-`InputStream` methods to `PemReader`
- Allow `SslContext` to get keys/certificates from `InputStreams`
- Add `InputStream`-based setters for key/trust managers to `SslContextBuilder`
Result:
Callers may pass an `InputStream` instead of a `File` to `SslContextBuilder`.
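A usage sketch, assuming the new overloads are SslContextBuilder.forServer(InputStream, InputStream) and friends; the resource names are illustrative:

```
import java.io.InputStream;
import javax.net.ssl.SSLException;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

public final class SslContextFromClasspath {
    public static SslContext build() throws SSLException {
        // Resources packaged with the application; file names are illustrative.
        InputStream certChain = SslContextFromClasspath.class.getResourceAsStream("/server.crt");
        InputStream privateKey = SslContextFromClasspath.class.getResourceAsStream("/server.key");
        return SslContextBuilder.forServer(certChain, privateKey).build();
    }
}
```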
Motivation:
The implementation for obtaining the best possible MAC address is very good; many sub-par implementations are proposed on Stack Overflow.
While not strictly a Netty concern, it would be nice to offer this util to Netty users as well.
Modifications:
Extract the DefaultChannelId#defaultMachineId code that obtains the "best" MAC into a new helper called MacAddress, and keep the random-bytes fallback in DefaultChannelId.
Result:
New helper available.
Motivation:
OpenSslContext constructor fails with an UnsupportedOperationException if Unsafe is not present on the system.
Modifications:
Make OpenSslContext also work when Unsafe is not present by falling back to using JNI to obtain the memory address.
Result:
Using OpenSslContext also works on systems without Unsafe.
Motivation:
We failed to take the byte[] into account when trying to access the bytes and so produced a segfault.
Modifications:
Correctly pass the byte[] in.
Result:
No more segfault.
Motivation:
When a channel was registered before and is re-registered, we need to respect ChannelConfig.isAutoRead() and so start reading once the registration task completes. This was done "by luck" before 15162202fb.
Modifications:
Explicitly start reading once a Channel is re-registered if isAutoRead() is true.
Result:
Correctly receive data after re-registration completes.
Motivation:
Due to a regression introduced by e969b6917c we failed to pass the original ChannelPromise to the next ChannelOutboundHandler and so
may never notify the original ChannelPromise. This is related to #4805.
Modifications:
- Correctly pass the ChannelPromise
- Add unit test.
Result:
Correctly pass the ChannelPromise on deregister(...)
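For illustration, a custom ChannelOutboundHandler that forwards the caller's promise on deregister, which is the contract this fix restores inside the pipeline (sketch only):

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

public class ForwardingDeregisterHandler extends ChannelOutboundHandlerAdapter {
    @Override
    public void deregister(ChannelHandlerContext ctx, ChannelPromise promise) {
        // Pass the original promise to the next outbound handler so the
        // caller's promise is eventually notified.
        ctx.deregister(promise);
    }
}
```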
Motivation:
If the Connection header contains multiple values (which is valid) we fail to detect a websocket upgrade
Modification:
- Add a new method which allows checking whether a header field contains a specific value (respecting multiple header values)
- Use this method to detect the handshake
Result:
Correctly detect the handshake if the Connection header contains multiple values (separated by ',').
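A hypothetical helper illustrating the kind of check described (not necessarily the method that was added; the name and placement are assumptions):

```
import io.netty.handler.codec.http.HttpHeaders;

final class HeaderValues {
    // Returns true if the named header contains 'expected' among its
    // comma-separated values, e.g. "Connection: keep-alive, Upgrade".
    static boolean containsValue(HttpHeaders headers, String name, String expected) {
        String value = headers.get(name);
        if (value == null) {
            return false;
        }
        for (String candidate : value.split(",")) {
            if (candidate.trim().equalsIgnoreCase(expected)) {
                return true;
            }
        }
        return false;
    }
}
```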
Motivation:
If the ZlibCodecFactory can support using a custom window size we should support it by default in the websocket extensions as well.
Modifications:
Detect if a custom window size can be handled by the ZlibCodecFactory and if so enable it by default for PerMessageDeflate*ExtensionHandshaker.
Result:
Support window size flag by default in most installations.
Motivation:
If the user calls handshake.finishHandshake() we need to ensure that the user has the chance to set up the pipeline before any WebSocketFrames are read. Because of this we need
to delay the removal of the HttpRequestDecoder.
Modifications:
- Remove the HttpRequestDecoder via the EventLoop, delaying the removal and giving the user a chance to set up the pipeline after finishHandshake() completes
- Add unit test for this.
Result:
Less surprising and correct behaviour even if the HTTP response and WebSocket frame are received in one read operation.
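A conceptual sketch of the modification, assuming `channel` is the just-upgraded Channel (not the exact handshaker code):

```
import io.netty.channel.Channel;
import io.netty.handler.codec.http.HttpRequestDecoder;

final class DelayedDecoderRemoval {
    static void removeDecoderLater(final Channel channel) {
        // Schedule the removal on the EventLoop instead of removing it inline,
        // so the caller of finishHandshake() can reconfigure the pipeline first.
        channel.eventLoop().execute(new Runnable() {
            @Override
            public void run() {
                channel.pipeline().remove(HttpRequestDecoder.class);
            }
        });
    }
}
```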
Motivation:
When an InetNameResolver resolves a name, it is expected to preserve the
requested host name in the resolved InetAddress.
DefaultHostsFileEntriesResolver does not preserve the host name. For
example, resolving 'localhost' will return an InetAddress whose address
is '127.0.0.1', but its getHostString() will not return 'localhost' but
just '127.0.0.1'.
Modifications:
Fix the construction of parsed InetAddresses in HostsFileParser
Result:
Host name is preserved in the resolved InetAddress
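The key point can be illustrated with the JDK API that supports this, InetAddress.getByAddress(String, byte[]); a minimal sketch:

```
import java.net.InetAddress;
import java.net.UnknownHostException;

final class HostNamePreservingExample {
    public static void main(String[] args) throws UnknownHostException {
        byte[] loopback = {127, 0, 0, 1};
        // Passing the host name keeps it attached to the resolved address,
        // so getHostName() reports "localhost" rather than "127.0.0.1".
        InetAddress resolved = InetAddress.getByAddress("localhost", loopback);
        System.out.println(resolved.getHostName()); // localhost
    }
}
```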
Motivation:
If Netty's class files are renamed and the type references are updated (shaded) the native libraries will not function. The native epoll module uses implicit JNI bindings which requires the fully qualified java type names to match the method signatures of the native methods. This means EPOLL cannot be used with a shaded Netty.
Modifications:
- Make the JNI method registration dynamic
- Support a system property io.netty.packagePrefix which must be prepended to the name of the native library (to ensure the correct library is loaded) and all class names (to allow classes to be correctly referenced)
- Remove the system property io.netty.native.epoll.nettyPackagePrefix, which was recently added and whose supporting code was incomplete
Result:
transport-native-epoll can be used when Netty has been shaded.
Fixes https://github.com/netty/netty/issues/4800
Motivation:
We need to enable SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER when using OpenSslContext, as the memory address of the buffer that is passed to OpenSslEngine.wrap(...) may change between calls and retries. This is the case because
if the buffer is a heap buffer we need to copy it to a direct buffer to hand it over to the JNI layer. When this mode is not enabled we may see errors like: 'error:1409F07F:SSL routines:SSL3_WRITE_PENDING: bad write retry'.
Related to https://github.com/netty/netty-tcnative/issues/100.
Modifications:
Explicitly set the mode to SSL.SSL_MODE_RELEASE_BUFFERS | SSL.SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER (SSL.SSL_MODE_RELEASE_BUFFERS was already used implicitly before).
Result:
No more 'error:1409F07F:SSL routines:SSL3_WRITE_PENDING: bad write retry' possible when writing heap buffers.
Motivation:
When using SslProvider.OPENSSL we currently do not handle SNI on the client side.
Modifications:
Correctly enable SNI when using clientMode and peerHost != null.
Result:
SNI works even with SslProvider.OPENSSL.
Motivation:
Fix issue netty#2944.
Modifications:
Use '-' instead of '=>' and '!' instead of ':>' because the connection is bidirectional. Moreover, the toString() method does not know the direction, and there is no need to know it when only logging channel information.
Add 'L:' before the local address and 'R:' before the remote address.
Result:
After the fix, the log no longer confuses the user.
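For example, a connected channel's toString() would now render along the lines of `[id: 0x1234abcd, L:/127.0.0.1:8080 - R:/127.0.0.1:49152]` (id and addresses illustrative).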
Motivation:
The current interface for CompositeByteBuf.addComponent is not clear under what conditions ownership is transferred when addComponent is called. There should be a well defined behavior so that users can ensure that no leaks occur.
Modifications:
- CompositeByteBuf.addComponent should always assume reference count ownership
Result:
Users that call CompositeByteBuf.addComponent no longer have to check whether the buffer's ownership was transferred and, if not, release the buffer themselves.
Fixes https://github.com/netty/netty/issues/4760
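A short usage sketch of the ownership contract described above (buffer contents are illustrative):

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public final class AddComponentOwnership {
    public static void main(String[] args) {
        CompositeByteBuf composite = Unpooled.compositeBuffer();
        ByteBuf component = Unpooled.copiedBuffer("data", CharsetUtil.UTF_8);
        // addComponent always assumes reference-count ownership of 'component',
        // even if it throws, so the caller must not release it separately.
        composite.addComponent(component);
        composite.release(); // also releases 'component'
    }
}
```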
Motivation:
Http2CodecUtil uses ByteBufUtil.writeUtf8 but does not account for it
throwing an exception. If an exception is thrown because the data is not
valid UTF-16 that can be encoded as UTF-8, then the buffer will leak.
Modifications:
- Make sure the buffer is released if an exception is thrown
- Ensure call sites of Http2CodecUtil.toByteBuf can tolerate an
exception being thrown
Result:
No leak if the exception data cannot be converted to UTF-8.
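A conceptual sketch of the release-on-failure pattern described (allocator usage and names are assumptions, not the actual Http2CodecUtil code):

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBufUtil;

final class Utf8ToByteBuf {
    static ByteBuf toByteBuf(ByteBufAllocator alloc, CharSequence text) {
        ByteBuf buf = alloc.buffer();
        try {
            ByteBufUtil.writeUtf8(buf, text);
            return buf;
        } catch (Throwable cause) {
            buf.release(); // avoid leaking the buffer if encoding fails
            throw cause;
        }
    }
}
```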
Motivation:
The AbstractChannel(Channel parent) constructor was previously hard-coded to always
call DefaultChannelId.newInstance(), and this made it difficult to use a custom
ChannelId implementation with some commonly used Channel implementations.
Modifications:
Introduced newId() method in AbstractChannel, which by default returns
DefaultChannelId.newInstance() but can be overridden by subclasses. Added
ensureDefaultChannelId() test to AbstractChannelTest, to ensure the prior
behavior of calling DefaultChannelId.newInstance() still holds with the
AbstractChannel(Channel parent) constructor.
Result:
AbstractChannel now has the protected newId() method, but there is no functional
difference.
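A sketch of how a subclass could use the new hook; the MyChannelId nested class below is a purely hypothetical ChannelId implementation:

```
import io.netty.channel.ChannelId;
import io.netty.channel.socket.nio.NioSocketChannel;

public class CustomIdSocketChannel extends NioSocketChannel {

    // Hypothetical ChannelId, purely for illustration.
    private static final class MyChannelId implements ChannelId {
        private final String id = "custom-" + System.nanoTime();

        @Override
        public String asShortText() {
            return id;
        }

        @Override
        public String asLongText() {
            return id;
        }

        @Override
        public int compareTo(ChannelId o) {
            return asLongText().compareTo(o.asLongText());
        }
    }

    @Override
    protected ChannelId newId() {
        // Supply a custom ChannelId instead of DefaultChannelId.newInstance().
        return new MyChannelId();
    }
}
```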
Motivation:
Request bodies can easily be larger than Integer.MAX_VALUE in practice.
There's no reason, or intention, for Netty to impose this artificial constraint.
Worse, it currently does not fail if the body is larger than this value;
it just silently only reads the first Integer.MAX_VALUE bytes and discards the rest.
This restriction doesn't affect chunked transfers, which have no Content-Length header.
Modifications:
Force the use of `long HttpUtil.getContentLength(HttpMessage, long)` instead of
`int HttpUtil.getContentLength(HttpMessage, int)`.
Result:
Netty will support HTTP request bodies of up to Long.MAX_VALUE length.
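A usage sketch; the -1 default simply marks "no Content-Length header" and is an assumption of this example:

```
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.HttpUtil;

final class ContentLengthCheck {
    static boolean hasLargeBody(HttpRequest request) {
        // Read the header as a long so bodies larger than Integer.MAX_VALUE
        // are reported correctly; -1 means the header is absent.
        long contentLength = HttpUtil.getContentLength(request, -1L);
        return contentLength > Integer.MAX_VALUE;
    }
}
```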
Motivation:
Currently the initial headers for every stream are queued in the flow controller. Since the initial header frame may create streams, the peer must receive these frames in the order in which they were created, or else this is a protocol error and the connection will be closed. Tolerating the initial headers being queued would increase the complexity of the WeightedFairQueueByteDistributor, and the benefit of doing so is not clear.
Modifications:
- The initial headers will no longer be queued in the flow controllers
Result:
Fixes https://github.com/netty/netty/issues/4758
Motivation:
HttpConversionUtil's toHttpRequest and toHttpResponse methods can
allocate FullHttpMessage objects, and if an exception is thrown during
the header conversion then this object will not be released. If a
FullHttpMessage is not fired up the pipeline and the stream is closed,
then we remove it from the map but do not release the object. This leads
to a ByteBuf leak. Some of the logic related to stream lifetime management
and FullHttpMessage also predates the RFC being finalized and is not correct.
Modifications:
- Fix leaks in HttpConversionUtil
- Ensure the objects are released when they are removed from the map.
- Correct logic and unit tests where they are found to be incorrect.
Result:
Fixes https://github.com/netty/netty/issues/4780
Fixes https://github.com/netty/netty/issues/3619
Motivation:
When HttpClientUpgradeHandler upgrades from HTTP/1 to another protocol,
it performs a two-step operation:
1. Remove the SourceCodec (HttpClientCodec)
2. Add the UpgradeCodec
When HttpClientCodec is removed from the pipeline, the decoder being
removed triggers a channelRead() event with the data left in its
cumulation buffer. However, this is not received by the UpgradeCodec
because it has not been added yet; e.g. the HTTP/2 SETTINGS frame sent
by the server can be missed.
To fix the problem, we need to reverse the steps:
1. Add the UpgradeCodec
2. Remove the SourceCodec
However, this does not work as expected either, because UpgradeCodec can
send a greeting message such as HTTP/2 Preface. Such a greeting message
will be handled by the SourceCodec and will trigger an 'unsupported
message type' exception.
To really fix the problem, we need to make the upgrade a three-step process:
1. Remove/disable the encoder of SourceCodec
2. Add the UpgradeCodec
3. Remove the SourceCodec
Modifications:
- Add SourceCodec.prepareUpgradeFrom() so that SourceCodec can remove or
disable its encoder
- Implement HttpClientCodec.prepareUpgradeFrom() properly
- Miscellaneous:
- Log the related channel as well when logging the failure to send a
GOAWAY
Result:
Cleartext HTTP/1-to-HTTP/2 upgrade works again.
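A conceptual sketch of the new ordering; apart from prepareUpgradeFrom(), which this change introduces, the surrounding names and exact signatures are assumptions:

```
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpClientUpgradeHandler.SourceCodec;
import io.netty.handler.codec.http.HttpClientUpgradeHandler.UpgradeCodec;

final class UpgradeOrdering {
    static void upgrade(ChannelHandlerContext ctx, SourceCodec sourceCodec,
                        UpgradeCodec upgradeCodec, FullHttpResponse response) throws Exception {
        // 1. Disable the source codec's encoder so the upgrade codec's greeting
        //    (e.g. the HTTP/2 preface) is not handled by the old encoder.
        sourceCodec.prepareUpgradeFrom(ctx);
        // 2. Add/activate the upgrade codec (call shown is assumed for illustration).
        upgradeCodec.upgradeTo(ctx, response);
        // 3. Only now remove the source codec, so data left in its cumulation
        //    buffer (e.g. an HTTP/2 SETTINGS frame) still reaches the new codec.
        sourceCodec.upgradeFrom(ctx);
    }
}
```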
Motivation:
Some duplicated methods in message types of codec-memcache can be cleaned using AbstractReferenceCounted.
Modifications:
Use AbstractReferenceCounted to avoid duplicated methods.
Result:
Duplicated methods are cleaned.
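A minimal sketch of the pattern (class name assumed): the shared refCnt()/retain()/release() logic comes from AbstractReferenceCounted, and each message type only supplies deallocate():

```
import io.netty.util.AbstractReferenceCounted;

// Hypothetical base class; concrete memcache message types would extend it.
abstract class AbstractMemcacheMessageBase extends AbstractReferenceCounted {
    @Override
    protected void deallocate() {
        // Release any ByteBufs (key, extras, content) held by the message here.
    }
}
```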
Motivation:
See #3411. A reusable ArrayList in InternalThreadLocalMap can avoid allocations in the following pattern:
```
List<...> list = new ArrayList<...>();
// add something to the list, but the list is only used within this method
return list.toArray(new ...[list.size()]);
```
Modifications:
Add a reusable ArrayList to InternalThreadLocalMap and update code to use it.
Result:
Reuse a thread local ArrayList to avoid allocations.
Motivation:
As we can now easily build statically linked versions of tcnative, it makes sense to run our Netty build against all of them.
This helps ensure our code works with LibreSSL, OpenSSL and BoringSSL.
Modifications:
Allow specifying -Dtcnative.artifactId= and -Dtcnative.version=
Result:
Easy to run the Netty build against different tcnative flavors.
Motivation:
BinaryMemcacheObjectAggregator doesn't retain ByteBuf `extras`. So `io.netty.util.IllegalReferenceCountException: refCnt: 0, decrement: 1` will be thrown when aggregating a message containing `extras`. See the unit test for an example.
Modifications:
Retain `extras` to fix the IllegalReferenceCountException.
Result:
`extras` is retained.
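An illustrative sketch of the retain described above (the accessor names on the message types are assumptions):

```
import io.netty.buffer.ByteBuf;
import io.netty.handler.codec.memcache.binary.BinaryMemcacheMessage;

final class ExtrasRetain {
    static void copyExtras(BinaryMemcacheMessage from, BinaryMemcacheMessage to) {
        ByteBuf extras = from.extras(); // accessor names assumed
        if (extras != null) {
            // Retain so the aggregated message holds a live reference and no
            // IllegalReferenceCountException is thrown later.
            to.setExtras(extras.retain());
        }
    }
}
```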
Motivation:
StreamBufferingEncoder provides queueing so that MAX_CONCURRENT_STREAMS is not violated. However the stream id generation provided by Http2Connection.nextStreamId() only returns the next stream id that is expected on the connection and does not account for queueing. The codec should provide a way to generate the next stream id for a given endpoint that functions with or without queueing.
Modifications:
- Change Http2Connection.nextStreamId to Http2Connection.incrementAndGetNextStreamId
Result:
Http2Connection can generate the next stream id in queued and non-queued scenarios.
Fixes https://github.com/netty/netty/issues/4704
Motivation:
A few implementations of OioServerChannel have the default max messages per read set to 16. We should set the default to 1 so that we do not block on another read before a socket that has just been accepted has been handled.
Modifications:
- OioSctpServerChannel and OioServerSocketChannel metadata changed to use the default (1) max messages per read
Result:
Oio based servers will complete accepting a socket before potentially blocking waiting to accept other sockets.
Motivation:
AbstractBinaryMemcacheDecoder.currentMessage is not retained after sending it out. Hence, if a message contains `extras`, `io.netty.util.IllegalReferenceCountException` will be thrown in `channelInactive`.
Modifications:
Retain AbstractBinaryMemcacheDecoder.currentMessage after adding it to `out` and release it when it is no longer used.
Result:
No IllegalReferenceCountException or leak.
Motivation:
If validateHeaders is set in combination with the encoder/decoder it will be silently ignored. We should enforce the constraint that validateHeaders and encoder/decoder are mutually exclusive.
Modifications:
- Make sure that either validateHeaders or the encoder/decoder can be set, but not both.
Result:
AbstractHttp2ConnectionHandlerBuilder does not allow conflicting options to be set.
Motivation:
When an SSL record contains invalid extension data, SniHandler
currently throws an IndexOutOfBoundsException, which is not optimal.
Modifications:
- Do strict index range checks
Result:
No more unnecessary instantiation of exceptions and their stack traces
Motivation:
When a wildcard address is used to bind a socket and both IPv4 and IPv6 are usable, we should accept both (just like JDK IO/NIO does).
Modifications:
Detect a wildcard address and, if detected, use in6addr_any.
Result:
Correctly accept both IPv4 and IPv6.
Motivation:
Not all SSLEngine implementations permit beginHandshake being called while a handshake is in progress during the initial handshake. We should ensure we only go through the initial handshake code once to prevent unexpected exceptions from being thrown.
Modifications:
- Only call beginHandshake if there is not currently a handshake in progress
Result:
SslHandler's handshake method is compatible with OpenSSLEngineImpl in Android 5.0+ and 6.0+.
Fixes https://github.com/netty/netty/issues/4718
Motivation:
If a user adds a ChannelHandler from outside the EventLoop it is possible that handlerAdded(...) is scheduled on the EventLoop and is therefore called after other methods of the ChannelHandler, as the EventLoop may already be executing at this point in time.
Modification:
- Ensure we always check if the handlerAdded(...) method was already called and, if not, schedule the needed call on the EventLoop so it will be picked up after handlerAdded(...) was called. This works because, if the handler is added to the ChannelPipeline from outside the EventLoop, the actual handlerAdded(...) operation is scheduled on the EventLoop.
- Some cleanup in the DefaultChannelPipeline
Result:
Correct ordering of ChannelHandler method executions.
Motivation:
WebSocketClientCompressionHandler is stateless so it should be @Sharable.
Modifications:
Add @Sharable annotation to WebSocketClientCompressionHandler, make constructor private and add static field to get the instance.
Result:
Less object creation.
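A usage sketch, assuming the static field added by this change is named INSTANCE:

```
import io.netty.channel.ChannelPipeline;
import io.netty.handler.codec.http.websocketx.extensions.compression.WebSocketClientCompressionHandler;

final class PipelineSetup {
    static void addCompression(ChannelPipeline pipeline) {
        // Sharable and stateless, so the single instance can be reused
        // across many pipelines instead of allocating a handler per channel.
        pipeline.addLast(WebSocketClientCompressionHandler.INSTANCE);
    }
}
```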
Motivation:
We incorrectly added the trustCertChain as certificate chain when OpenSslClientContext was created. We need to correctly add the keyCertChain.
Modifications:
Correctly add whole keyCertChain.
Result:
SSL client auth works when using OpenSslClientContext and more than one cert is contained in the certificate chain.
Motivation:
DefaultHttp2RemoteFlowController does not correctly account for the padding in the event that frames are merged. This causes the internal state of DefaultHttp2RemoteFlowController to become corrupt and can result in attempting to write frames when there are none.
Modifications:
- Update DefaultHttp2RemoteFlowController to account for frame sizes not necessarily adding together.
Result:
DefaultHttp2RemoteFlowController internal state does not become corrupt when padding is present.
Fixes https://github.com/netty/netty/issues/4573
Motivation:
PlatformDependent0 has an optimization which grabs the char[] from a String. Since this code was introduced, http://openjdk.java.net/jeps/254 (Compact Strings) has been gaining momentum in JDK 9. This JEP changes the internal storage from char[] to byte[], and thus the existing char[]-only optimizations will not work.
Modifications:
- The ASCII encoding char[] String optimizations should also work for byte[].
Result:
ASCII encoding char[] String optimizations don't break if the underlying storage in String is byte[].
Motivation:
I am using Netty as an HTTP server; it fails to decode some POST requests when the Content-Type is absent from the multipart/form-data body.
Modifications:
Default the content type to application/octet-stream when parsing the uploaded file data if the Content-Type is absent from the multipart request body.
Result:
The HTTP request can be decoded as normal.
Motivation
----------
Currently, only the fixed 24 bytes are allocated for the header, and
then all the params as well as the optional extras and key are written
into the header section.
It is very likely that the buffer needs to expand at least twice
if the extras and/or the key take up more space.
Modifications
-------------
Since at the point of allocation we know the key and extras lengths,
the buffer can be preallocated with the exact size, avoiding unnecessary
resizing as well as over-allocation (since the allocator sizes buffers
in powers of two internally).
Result
------
Less buffer resizing needed when encoding a memcache operation.
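A sketch of the sizing logic described, assuming the 24-byte fixed binary-protocol header plus key and extras (variable names are illustrative):

```
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;

final class HeaderAllocation {
    static ByteBuf allocateHeader(ByteBufAllocator alloc, ByteBuf key, ByteBuf extras) {
        int keyLength = key == null ? 0 : key.readableBytes();
        int extrasLength = extras == null ? 0 : extras.readableBytes();
        // Allocate exactly what the header section needs: 24 fixed bytes plus
        // the variable key and extras, so no resizing is required while encoding.
        return alloc.buffer(24 + keyLength + extrasLength);
    }
}
```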