Motivation:
Allow the user to create a NioServerSocketChannel from an existing ServerSocketChannel.
Modifications:
Add an extra constructor
Result:
Now the user is able to create a NioServerSocketChannel from an existing ServerSocketChannel, as is already possible with all the other Nio*Channel implementations.
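A minimal usage sketch of the new constructor; obtaining the JDK channel via open() is just an example:

    import java.nio.channels.ServerSocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    // a pre-existing JDK channel, e.g. inherited from another framework
    ServerSocketChannel jdkChannel = ServerSocketChannel.open();

    // wrap it instead of letting Netty open a new one
    NioServerSocketChannel channel = new NioServerSocketChannel(jdkChannel);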
Motivation:
Ensure the user knows the Channel must be closed to release resources like file handles.
Modifications:
Add some extra javadoc.
Result:
Clearer documentation.
Motivation:
At the moment, when an unresolvable InetSocketAddress is passed into the epoll transport, an NPE is thrown.
Modifications:
Add a check that throws an UnknownHostException if an InetSocketAddress could not be resolved.
Result:
Proper handling of unresolvable InetSocketAddresses
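A hedged sketch of the added guard; the exact call site inside the epoll transport and the exception message are assumptions:

    import java.net.InetSocketAddress;
    import java.net.SocketAddress;
    import java.net.UnknownHostException;

    static void checkResolved(SocketAddress remoteAddress) throws UnknownHostException {
        if (remoteAddress instanceof InetSocketAddress) {
            InetSocketAddress addr = (InetSocketAddress) remoteAddress;
            if (addr.isUnresolved()) {
                // fail fast with a meaningful exception instead of an NPE later on
                throw new UnknownHostException(addr.getHostName());
            }
        }
    }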
Motivation:
If the last item analyzed in a previously received HttpChunk/HttpContent was part of an attribute's name, the read index was not moved to the correct position, raising an exception in some cases (the "new" name analyzed is empty, which is not allowed, hence the exception).
The read index should be reset to the last valid position encountered, whatever the case. Currently it was set only when there was an attribute that was not yet finished (name complete, but content possibly not).
Therefore elements could be rescanned multiple times (including completed elements), and bad decoding could occur, for instance when stopping in the middle of an attribute's name.
Modifications:
To fix this issue, since "firstpos" contains the last "valid" read index of the decoding (when finding a '&', '=', or CR/LF), we should also set the read index for the following cases:
'lastchunk' encountered, therefore finishing the current buffer
any case other than a current attribute that is not yet finished (in particular, when the name has not been found yet)
So, for these two cases, add:
undecodedChunk.readerIndex(firstpos);
Result:
Now the decoding is done only once, content is appended from chunk/content to chunk/content, and a name is decoded correctly even when split across two chunks/contents.
A JUnit test, testChunkCorrect, was added; it should not raise any exception.
Motivation:
When starting with a read-only NIO buffer, wrapping it in a ByteBuf,
and then later retrieving a re-wrapped NIO buffer, the limit ended up
too short.
Modifications:
Changed ReadOnlyByteBufferBuf.nioBuffer(int,int) to compute the
limit in the same manner as the internalNioBuffer method.
Result:
Round-trip conversion from NIO to ByteBuf to NIO will work reliably.
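A round-trip sketch of the scenario; the buffer content is illustrative:

    import java.nio.ByteBuffer;
    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;

    ByteBuffer readOnly = ByteBuffer.wrap("hello".getBytes()).asReadOnlyBuffer();
    ByteBuf buf = Unpooled.wrappedBuffer(readOnly);

    // re-wrap as an NIO buffer; the limit now matches the requested length
    ByteBuffer rewrapped = buf.nioBuffer(0, buf.readableBytes());
    assert rewrapped.remaining() == buf.readableBytes();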
Motivation:
Remove the synchronization bottleneck in startThread(), which is called by each execute(...) call from outside the EventLoop.
Modifications:
Replace the synchronized block with an AtomicInteger and compareAndSet loops (sketched below).
Result:
Less contention during SingleThreadEventExecutor.execute(...)
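A minimal sketch of the pattern; the state constants and method names are illustrative, not the actual SingleThreadEventExecutor internals:

    import java.util.concurrent.atomic.AtomicInteger;

    final class StartThreadSketch {
        private static final int ST_NOT_STARTED = 1;
        private static final int ST_STARTED = 2;

        private final AtomicInteger state = new AtomicInteger(ST_NOT_STARTED);

        void startThread() {
            // only the first caller wins the CAS and starts the thread;
            // everyone else falls through without blocking on a monitor
            if (state.get() == ST_NOT_STARTED
                    && state.compareAndSet(ST_NOT_STARTED, ST_STARTED)) {
                doStartThread();
            }
        }

        private void doStartThread() {
            // actual thread startup elided
        }
    }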
Motivation:
Cleanup pom.xml file.
Modifications:
Remove sniffer whitelist entries for NIO.2, as we no longer include a NIO.2-based transport.
Result:
Fewer entries in pom.xml
Motivation:
At the moment we use SocketChannel.open(), ServerSocketChannel.open() and DatagramChannel.open() within the constructors of our
NIO channels. This introduces a bottleneck when creating many connections, as these calls delegate to SelectorProvider.provider(), which
synchronizes internally. This change removes the bottleneck.
Modifications:
Obtain a static instance of the SelectorProvider and use SelectorProvider.openSocketChannel(), SelectorProvider.openServerSocketChannel() and
SelectorProvider.openDatagramChannel(). This eliminates the bottleneck, as SelectorProvider.provider() is no longer called on every channel creation (see the sketch below).
Result:
Less contention when creating new channels.
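A sketch of the change; the factory method shape is an assumption:

    import java.io.IOException;
    import java.nio.channels.SocketChannel;
    import java.nio.channels.spi.SelectorProvider;
    import io.netty.channel.ChannelException;

    final class ProviderSketch {
        // resolved once; SelectorProvider.provider() synchronizes internally,
        // so calling it for every new channel would serialize all creators
        private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();

        static SocketChannel newSocket(SelectorProvider provider) {
            try {
                // open the channel via the cached provider instead of SocketChannel.open()
                return provider.openSocketChannel();
            } catch (IOException e) {
                throw new ChannelException("Failed to open a socket.", e);
            }
        }
    }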
Motivation:
Remove the synchronization bottleneck and so speed things up.
Modifications:
Introduce a ThreadLocal cache that holds mappings between classes of ChannelHandlerAdapter implementations and the result of checking whether the @Sharable annotation is present.
This way we only need to do the real check once and can serve all other calls from the cache (sketched below). A ThreadLocal and WeakHashMap combination is used to implement the cache,
as this minimizes contention while still making sure we do not leak class instances in containers.
Result:
Less contention when adding a ChannelHandlerAdapter to the ChannelPipeline
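A sketch of the cache; the class and method names are illustrative:

    import java.util.Map;
    import java.util.WeakHashMap;
    import io.netty.channel.ChannelHandler;

    final class SharableCheckSketch {
        // one map per thread avoids contention; WeakHashMap keys do not pin
        // handler classes in memory, which matters in containers
        private static final ThreadLocal<Map<Class<?>, Boolean>> CACHE =
                new ThreadLocal<Map<Class<?>, Boolean>>() {
                    @Override
                    protected Map<Class<?>, Boolean> initialValue() {
                        return new WeakHashMap<Class<?>, Boolean>();
                    }
                };

        static boolean isSharable(ChannelHandler handler) {
            Class<?> clazz = handler.getClass();
            Map<Class<?>, Boolean> cache = CACHE.get();
            Boolean sharable = cache.get(clazz);
            if (sharable == null) {
                // the real reflection check runs only once per class per thread
                sharable = clazz.isAnnotationPresent(ChannelHandler.Sharable.class);
                cache.put(clazz, sharable);
            }
            return sharable;
        }
    }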
Motivation:
- As reported recently [1], Recycler's thread-local object pool has unbounded capacity which is a potential problem.
- It accesses a hash table on each push and pop for debugging purposes. We don't really need it besides debugging Netty itself.
Modifications:
- Introduced the maxCapacity constructor parameter to Recycler. The default maxCapacity is retrieved from a system property and defaults to 256K, which should be plenty for most cases.
- Recycler.Stack.map is now created and accessed only when assertion is enabled for Recycler.
Result:
- Recycler does not grow infinitely anymore.
- If assertion is disabled, Recycler should be much faster.
[1] https://github.com/netty/netty/issues/1841
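A minimal sketch of a bounded Recycler; the pooled type is hypothetical and the exact Handle signature varies between Netty versions:

    import io.netty.util.Recycler;

    final class PooledObject {
        // bound the per-thread pool to 256 objects instead of growing without limit
        private static final Recycler<PooledObject> RECYCLER = new Recycler<PooledObject>(256) {
            @Override
            protected PooledObject newObject(Handle handle) {
                return new PooledObject(handle);
            }
        };

        private final Recycler.Handle handle;

        private PooledObject(Recycler.Handle handle) {
            this.handle = handle;
        }

        void recycle() {
            RECYCLER.recycle(this, handle);
        }
    }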
Motivation:
We don't really need to propagate an event when handling the event fails.
Modifications:
Do not use a finally block in AbstractRemoteAddressFilter; forward the event only after handling succeeds (sketched below).
Result:
AbstractRemoteAddressFilter does not forward an event in case of failure.
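A sketch of the resulting control flow; the handler method body is simplified:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    class NoFinallySketch extends ChannelInboundHandlerAdapter {
        @Override
        public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
            handleNewChannel(ctx);        // may throw; if it does, nothing is forwarded
            ctx.fireChannelRegistered();  // reached only when handling succeeded
        }

        void handleNewChannel(ChannelHandlerContext ctx) throws Exception {
            // accept/reject decision elided
        }
    }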
Motivation:
Recently merged ipfilter package has the following problems:
* AbstractIpFilterHandler could be improved to support any SocketAddress types rather than only InetSocketAddress.
* AbstractIpFilterHandler can be removed immediately after the decision is made, rather than keeping the outcome of the decision as an attribute.
* AbstractIpFilterHandler doesn't have a hook for the accepted addresses.
* The hook method (reject()) needs to be named in line with other handler methods (i.e. channelRejected())
* IpFilterRuleHandler should allow accepting zero rules - it's particularly useful for machine-configured setup (i.e. specifying zero rules disables ipfilter).
* IpFilterRuleType.ALLOW/DENY should be ACCEPT/REJECT for consistency.
Modifications:
* AbstractIpFilterHandler has been renamed to AbstractRemoteAddressFilter and now uses a type parameter.
* Added channelAccepted() and renamed reject() to channelRejected()
* Added ChannelHandlerContext as a parameter of accept() so that accept() can add a listener to the closeFuture() of the channel. This way, UniqueIpFilter continues working even if we remove the filtering handler early.
* Various renames
* IpFilterRuleHandler -> RuleBasedIpFilter
* UniqueIpFilterHandler -> UniqueIpFilter
Result:
* Much cleaner API with more extensibility
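A usage sketch of the renamed API; the subnet rule shown is illustrative:

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.ipfilter.IpFilterRuleType;
    import io.netty.handler.ipfilter.IpSubnetFilterRule;
    import io.netty.handler.ipfilter.RuleBasedIpFilter;
    import io.netty.handler.ipfilter.UniqueIpFilter;

    static void install(ChannelPipeline pipeline) {
        // reject one subnet, accept everything else
        pipeline.addFirst(new RuleBasedIpFilter(
                new IpSubnetFilterRule("10.0.0.0", 8, IpFilterRuleType.REJECT)));

        // or: allow at most one open connection per remote IP
        pipeline.addFirst(new UniqueIpFilter());
    }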
Motivation:
CONTRIBUTING.md is useful only because it lets Github show a user the
link to it so the user can check what information we need before
submitting a bug report. However, Github does not do the same for a
pull request submission form, and thus there's no reason to keep the
information about how to submit a good pull request in CONTRIBUTING.md.
Modification:
Replace the section about issuing a pull request with the link to the
official developer guide.
Result:
CONTRIBUTING.md is easier to maintain.
Motivation:
We often receive a bug report or a pull request which does not give us
enough information. If CONTRIBUTING.md exists in the repository, Github
will display some notice in the beginning of the issue submission form,
which might increase the overall quality of the bug reports and pull
requests.
Modification:
Write CONTRIBUTING.md
Result:
Potentially higher-quality bug reports and pull requests
This changeset removes the separate message headers and merges the
field directly into the messages. This greatly simplifies the
object hierarchy and also saves header allocations in the pipeline.
Merged WebSocketClient and WebSocketSslClient
Add private constructors to fix checkstyle errors.
More checkstyle madness.
Made WebSocketClientRunner final.
Previously, ConcurrentHashMapV8 evaluated ((x | 1) == 0), an expression
that always returns false because (x | 1) always has its lowest bit set
and therefore can never be zero. This commit brings Netty closer to the
Java 8 implementation.
Motivation:
When an HttpResponseDecoder decodes an invalid chunk, a LastHttpContent instance is produced and the decoder enters the 'BAD_MESSAGE' state, in which it is not supposed to produce any further messages. However, because HttpObjectDecoder.invalidChunk() did not clear this.message out to null, decodeLast() could produce another LastHttpContent message in certain situations.
Modification:
Do not forget to null out HttpObjectDecoder.message in invalidChunk(), and add a test case for it.
Result:
No more consecutive LastHttpContent messages produced by HttpObjectDecoder.
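A hedged sketch of the fix; the surrounding method is simplified and partly assumed (HttpObjectDecoder is a ReplayingDecoder over its State enum):

    import io.netty.buffer.Unpooled;
    import io.netty.handler.codec.DecoderResult;
    import io.netty.handler.codec.http.DefaultLastHttpContent;
    import io.netty.handler.codec.http.HttpContent;

    // inside HttpObjectDecoder, simplified
    private HttpContent invalidChunk(Exception cause) {
        checkpoint(State.BAD_MESSAGE);  // stop decoding further input
        HttpContent chunk = new DefaultLastHttpContent(Unpooled.EMPTY_BUFFER);
        chunk.setDecoderResult(DecoderResult.failure(cause));
        message = null;  // the fix: decodeLast() can no longer emit another LastHttpContent
        return chunk;
    }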
This changeset is related to #2182, which exposes the failure in
the http codec, but the memcache codec works very similarly. In addition,
better failure handling in the decoder has been added.
This also factors out some logic from ChannelOutboundBuffer. Mainly, many
transports do not need nioBuffers() and also do not need to copy from heap to direct buffers, so this functionality was moved to
NioSocketChannelOutboundBuffer. Also introduce an EpollChannelOutboundBuffer, which uses
memory addresses for all writes to reduce GC pressure.
Motivation:
ChunkedWriteHandler can sometimes fail to write the last chunk of a ChunkedInput due to an I/O error. Subsequently, the ChunkedInput's associated promise is marked as a failure and the connection is closed. When the connection is closed, ChunkedWriteHandler attempts to clean up its message queue and to mark their promises as success or failure. However, because the promise of the ChunkedInput, which was consumed completely yet failed to be written, is already marked as a failure, the attempt to mark it as a success fails, leading to a WARN-level log message.
Modification:
Use trySuccess() instead of setSuccess() so that the attempt to mark a ChunkedInput as success does not raise an exception even if the promise is already done.
Result:
Fixes #2249
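A small sketch of why trySuccess() is the right call here:

    import java.io.IOException;
    import io.netty.util.concurrent.GlobalEventExecutor;
    import io.netty.util.concurrent.Promise;

    Promise<Void> promise = GlobalEventExecutor.INSTANCE.newPromise();
    promise.tryFailure(new IOException("write failed"));  // promise is now done

    // trySuccess() simply returns false on an already-done promise...
    boolean marked = promise.trySuccess(null);
    assert !marked;

    // ...whereas promise.setSuccess(null) here would throw IllegalStateException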
Motivation:
At the moment, it is impossible to give a user full control over what to do in response to a request with an 'Expect: 100-continue' header. A user has to do one of the following:
- Accept the request and respond with 100 Continue, or
- Send the reject response and close the connection.
... which means it is impossible to send the reject response and keep the connection alive so that the client can send additional requests.
Modification:
Added a public method called 'reset()' to HttpObjectDecoder so that a user can reset the state of the decoder easily. Once called, the decoder will assume the next input will be the beginning of a new request.
HttpObjectAggregator now calls 'reset()' right after calling 'handleOversizedMessage()' so that the decoder can continue to decode subsequent requests even after a request with the 'Expect: 100-continue' header is rejected.
Added relevant unit tests / Minor clean-up
Result:
This commit completes the fix of #2211
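A usage sketch of the new hook; 'decoder' is assumed to be the HttpObjectDecoder (e.g. HttpRequestDecoder) already sitting in the pipeline:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.http.DefaultFullHttpResponse;
    import io.netty.handler.codec.http.HttpHeaders;
    import io.netty.handler.codec.http.HttpObjectDecoder;
    import io.netty.handler.codec.http.HttpRequest;
    import io.netty.handler.codec.http.HttpResponseStatus;
    import io.netty.handler.codec.http.HttpVersion;

    void reject(ChannelHandlerContext ctx, HttpRequest request, HttpObjectDecoder decoder) {
        if (HttpHeaders.is100ContinueExpected(request)) {
            // reject without closing the connection
            ctx.writeAndFlush(new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.EXPECTATION_FAILED));
            // treat the next bytes on the wire as the start of a new request
            decoder.reset();
        }
    }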