Motivation:
Not knowing which unit is used for the value returned by maxContentLength() of the MessageAggregator when reading the Javadoc is annoying and can be a source of bugs.
Modifications:
Added the mention "in bytes"
Result:
Javadoc is clear.
Motivation:
Not knowing which unit is used for the maxContentLength of the HttpObjectAggregator when reading the Javadoc is annoying and can be a source of bugs.
Modifications:
Added the mention "in bytes"
Result:
Javadoc is clear.
Related: #3464
Motivation:
When a connection attempt fails,
NioSocketChannelUnsafe.closeExecutor() triggers a SocketException,
suppressing the channelUnregistered() event.
Modification:
Do not attempt to get the SO_LINGER value when the socket is not open yet.
Result:
One less bug
Motivation:
Due to a regression introduced by b898bdd, we failed to set the localAddress if the connect did not succeed directly.
Modifications:
Correctly set localAddress in doConnect(...)
Result:
Be able to get the localAddress in all cases.
Related: #3445
Motivation:
HttpObjectDecoder.HeaderParser does not reset its counter (the size
field) when it fails to find the end of line. If a header is split
into multiple fragments, the counter is increased as many times as the
number of fragments, resulting in an unexpected TooLongFrameException.
Modifications:
- Add test cases that reproduce the problem
- Reset the HeaderParser.size field when no EOL is found.
Result:
One less bug
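A minimal sketch of the parsing issue, assuming a simplified scanner rather than the actual HttpObjectDecoder.HeaderParser internals (the class, method names, and use of ByteProcessor here are illustrative):

    import io.netty.buffer.ByteBuf;
    import io.netty.handler.codec.TooLongFrameException;
    import io.netty.util.ByteProcessor;

    // Illustrative scanner, not the real HeaderParser: `size` counts every byte
    // scanned so far. Without the reset below, the same header bytes are counted
    // once per fragment and a perfectly legal header can trip the limit.
    final class HeaderLineScanner implements ByteProcessor {
        private final int maxLength;
        private int size;

        HeaderLineScanner(int maxLength) {
            this.maxLength = maxLength;
        }

        /** Returns the index of the line feed, or -1 if the line is incomplete. */
        int scan(ByteBuf buffer) {
            int lf = buffer.forEachByte(this);
            if (lf == -1) {
                size = 0; // no EOL found: forget the partial scan (the missing reset)
            }
            return lf;
        }

        @Override
        public boolean process(byte value) {
            if (value == '\n') {
                return false; // stop scanning: end of line found
            }
            if (++size >= maxLength) {
                throw new TooLongFrameException("HTTP header is larger than " + maxLength + " bytes.");
            }
            return true; // keep scanning
        }
    }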
Motivation:
`Unpooled`'s javadoc mentioned the generation of hex dumps and the swapping of an integer's byte order,
which are actually provided by `ByteBufUtil`.
Modifications:
Sentence moved to `ByteBufUtil` javadoc.
Result:
`Unpooled` javadoc is correct.
Motivation:
In 6b941e9bdb I introduced a regression that could cause an IllegalStateException.
An incomplete fix was committed as part of #3443. This commit adds a proper fix.
Modifications:
Remove FileDescriptor.INVALID and add FileDescriptor.isOpen() as a replacement. Once FileDescriptor.close() is called, isOpen() will return false.
Result:
No more IllegalStateException caused by a closed channel.
Motivation:
EpollDatagramChannel never calls fireChannelActive() after connect(), which is a bug.
Modifications:
Correctly call fireChannelActive() when needed
Result:
Correct behaviour
Motivation:
Before, structs were passed by value and not by pointer, which forced an unnecessary memory copy.
Modifications:
- Use "const struct....*" as replacement
Result:
No more unnecessary memory copies
Motivation:
When creating an address from a file descriptor we may use the wrong byte order and so end up with an incorrect InetAddress.
Modification:
Do not manually shift bytes.
Result:
Correct address in all cases.
Motivation:
Because of a regression, accept could sometimes produce an IllegalArgumentException.
Modifications:
Correctly respect the offset when decoding the port and scope id.
Result:
No more IllegalArgumentException
Motivation:
Twitter hpack has been upgraded to 0.10.1 to fix a parsing bug.
Modifications:
Updated the parent pom to specify the dependency version.
Result:
HTTP/2 updated to the latest hpack release.
Motivation:
This is a regression that was introduced as part of 6b941e9bdb. The regression could trigger an IllegalStateException indefinitely if a channel goes inactive while its events are being processed.
Modifications:
Correctly check if the channel is still active before triggering the callbacks.
Result:
No more IllegalStateException
Motivation:
There is a small race in the native transport where accept(...) may succeed but a later attempt to obtain the remote address from the fd may fail if the fd is already closed.
Modifications:
Let accept(...) directly set the remote address.
Result:
No more race possible.
Motivation:
When epoll LT is used and autoRead == false upon entering epollIn(), we need to return without reading any data.
Modifications:
Correctly respect autoRead == false if using epoll LT.
Result:
Consistent and correct behaviour.
Motivation:
In the native transport we should throw a pre-instantiated IOException on a connection reset while reading.
Modifications:
Correctly throw the pre-instantiated IOException when ECONNRESET is received
Result:
Less overhead on connection reset
Motivation:
At the moment, when EmbeddedChannel is used and a ChannelHandler tries to schedule a task, an UnsupportedOperationException is thrown. This makes it impossible to test such handlers or even reuse them with EmbeddedChannel.
Modifications:
- Factor out reusable scheduling code into AbstractSchedulingEventExecutor
- Let EmbeddedEventLoop and SingleThreadEventExecutor extend AbstractSchedulingEventExecutor
- Add EmbeddedChannel.runScheduledPendingTasks(), which allows running all scheduled tasks that are ready
Result:
EmbeddedChannel is now usable even with ChannelHandlers that try to schedule tasks.
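A brief usage sketch (the handler and timings are made up for illustration): a handler that schedules a task no longer blows up inside EmbeddedChannel, and the test can drive the scheduled work explicitly:

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.embedded.EmbeddedChannel;
    import java.util.concurrent.TimeUnit;

    public final class ScheduledTaskExample {
        public static void main(String[] args) throws Exception {
            EmbeddedChannel channel = new EmbeddedChannel(new ChannelInboundHandlerAdapter() {
                @Override
                public void channelActive(final ChannelHandlerContext ctx) {
                    // This used to throw UnsupportedOperationException in EmbeddedChannel.
                    ctx.executor().schedule(new Runnable() {
                        @Override
                        public void run() {
                            ctx.writeAndFlush("ping");
                        }
                    }, 50, TimeUnit.MILLISECONDS);
                    ctx.fireChannelActive();
                }
            });

            Thread.sleep(100);

            // Run every scheduled task whose deadline has passed.
            channel.runScheduledPendingTasks();

            Object out = channel.readOutbound();
            System.out.println(out); // -> ping
        }
    }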
Motivation:
Provide non-blocking XML parser as Netty codec.
Modifications:
New codec implemented/extracted.
io.netty.handler.codec.xml.XmlDecoder decodes the XML fed to it without blocking.
Result:
Non-blocking XML stream parsing.
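A small, illustrative pipeline setup (the inline consumer below is just a placeholder for whatever processes the parse events):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.xml.XmlDecoder;

    public class XmlPipelineInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            // XmlDecoder emits XML parse events (element start/end, characters, ...)
            // as bytes arrive, so the event loop never blocks on a full document.
            ch.pipeline().addLast(new XmlDecoder());
            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                @Override
                public void channelRead(ChannelHandlerContext ctx, Object xmlEvent) {
                    System.out.println(xmlEvent); // placeholder consumer
                }
            });
        }
    }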
Motivation:
Currently CORS can be configured to support a 'null' origin, which can
be set by a browser if a resource is loaded from the local file system.
When this is done 'Access-Control-Allow-Origin' will be set to "*" (any
origin). There is also a configuration option to allow credentials being
sent from the client (cookies, basic HTTP Authentication, client side
SSL). This is indicated by the response header
'Access-Control-Allow-Credentials' being set to true. When this is set
to true, the "*" origin is not valid as the value of
'Access-Control-Allow-Origin' and a browser will reject the request:
http://www.w3.org/TR/cors/#resource-requests
Modifications:
Updated CorsHandler's setAllowCredentials to check the origin and if it
is "*" then it will not add the 'Access-Control-Allow-Credentials'
header.
Result:
It is possible to have a client send a 'null' origin, and at the same
time have CORS configured to support that and to allow credentials
in that combination.
Conflicts:
codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java
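A minimal sketch of the check described above, using illustrative helper names rather than the exact CorsHandler internals:

    import io.netty.handler.codec.http.HttpResponse;

    final class CorsCredentialsSketch {
        // Only emit 'Access-Control-Allow-Credentials' when the echoed origin
        // is a concrete origin, never the wildcard "*".
        static void setAllowCredentials(HttpResponse response) {
            String origin = response.headers().get("Access-Control-Allow-Origin");
            if (origin != null && !"*".equals(origin)) {
                response.headers().set("Access-Control-Allow-Credentials", "true");
            }
        }
    }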
Motivation:
At the moment, if you want to return an HTTP header containing multiple
values you have to set/add that header once with the values wanted. If
you use set/add with an array/iterable, multiple HTTP header fields will
be returned in the response.
Note that this is indeed a suggestion; additional work and tests
should be added. This is mainly to bring up a discussion.
Modifications:
Added a flag to specify that when multiple values exist for a single
HTTP header, they are added as a comma separated string.
In addition, added a method to StringUtil to help escape comma separated
value CharSequences.
Result:
Allows for responses to be smaller.
Conflicts:
codec-http/src/main/java/io/netty/handler/codec/http/DefaultHttpHeaders.java
codec-http/src/test/java/io/netty/handler/codec/http/cors/CorsHandlerTest.java
codec/src/main/java/io/netty/handler/codec/DefaultTextHeaders.java
codec/src/test/java/io/netty/handler/codec/DefaultTextHeadersTest.java
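A simplified sketch of the folding and escaping described above (class and method names are made up; this is not the DefaultHttpHeaders/StringUtil code):

    import java.util.Arrays;
    import java.util.List;

    public final class HeaderFoldingSketch {
        // Quote values that contain a comma or a double quote, then join them
        // with commas so they fit into a single header field.
        static String foldValues(List<String> values) {
            StringBuilder sb = new StringBuilder();
            for (String value : values) {
                if (sb.length() > 0) {
                    sb.append(',');
                }
                if (value.indexOf(',') >= 0 || value.indexOf('"') >= 0) {
                    sb.append('"').append(value.replace("\"", "\\\"")).append('"');
                } else {
                    sb.append(value);
                }
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            // -> no-cache,"Mon, 15 Jun"
            System.out.println(foldValues(Arrays.asList("no-cache", "Mon, 15 Jun")));
        }
    }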
Motivation:
We should allow getting a ChannelOption/AttributeKey from a String. This will make it a lot easier to make use of configuration files in applications.
Modifications:
- Add exists(...) and newInstance(...) methods to ChannelOption and AttributeKey, and alter valueOf(...) to return an existing instance for a String or create a new one.
- Add unit tests.
Result:
Much more flexible usage of ChannelOption and AttributeKey.
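A short usage sketch of the lookup methods described above:

    import io.netty.channel.ChannelOption;
    import io.netty.util.AttributeKey;

    public final class OptionLookupExample {
        public static void main(String[] args) {
            // valueOf(...) returns the existing constant for a known name
            // (e.g. read from a configuration file) or creates a new one.
            ChannelOption<Boolean> keepAlive = ChannelOption.valueOf("SO_KEEPALIVE");

            // exists(...) only checks for the name without creating anything.
            boolean known = ChannelOption.exists("SO_KEEPALIVE");

            // AttributeKey offers the same String based lookup.
            AttributeKey<String> sessionKey = AttributeKey.valueOf("session-id");

            System.out.println(keepAlive + " exists=" + known + ' ' + sessionKey);
        }
    }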
Motivation:
As we plan to have other native transports soon (like a kqueue transport), we should move the Unix classes/interfaces out of the epoll package so we
can introduce other implementations without breaking things before the next stable release.
Modifications:
Create a new io.netty.channel.unix package and move stuff over there.
Result:
Possible to introduce other native implementations besides epoll.
Motivation:
Release 4.0.25 was not usable in OSGi environments due to a simple typo.
An automated test could have caught the problem even before it was
committed.
Modifications:
This patch introduces a new artifact, osgitests, which pulls in all
production artifacts (which we want to be checked for OSGi compliance).
It contains only a single unit test, which runs a pax-exam container
with Felix OSGi.
At initialization time, it scans all the artifact's dependencies,
looking for things belonging to the io.netty group. The container is
configured to deploy those artifacts as bundles and fail if any bundle
is found to be unresolved. It performs a final check to see if any
bundles were tested this way, to make sure the mechanism is not
completely broken.
We are using wrappedBundle(), as two of our third-party dependencies do
not export packages correctly -- this masks the problem, assuming that
whoever deploys our artifacts depending on them will figure out how to
OSGify them.
Result:
Simple typos and other bundle manifest errors should be caught during
test phase of every build.
Motivation:
This will avoid one unnecessary method invocation, which will slightly improve performance.
Modifications:
Instead of calling isReadable(), we just check the value of readableBytes().
Result:
No functional change.
Motivation:
At the moment we have two problems:
- CompositeByteBuf.addComponent(...) will not add the supplied buffer to the CompositeByteBuf if it is empty, which means it will not be released when CompositeByteBuf.release() is called. This is a problem, as a user will expect everything that was added to be released (the user does not know we did not add it).
- CompositeByteBuf.addComponents(...) will either add no buffers if none is readable, and so has the same problem as addComponent(...), or directly release the ByteBuf if at least one ByteBuf is readable. Again this gives inconsistent handling and may lead to memory leaks.
Modifications:
- Always add the buffer to the CompositeByteBuf, so it is released on the release() call.
Result:
Consistent handling and no buffer leaks.
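A small example of the resulting (consistent) contract; buffer contents here are arbitrary:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.CompositeByteBuf;
    import io.netty.buffer.Unpooled;

    public final class CompositeReleaseExample {
        public static void main(String[] args) {
            CompositeByteBuf composite = Unpooled.compositeBuffer();
            ByteBuf empty = Unpooled.buffer(0);
            ByteBuf data = Unpooled.copiedBuffer(new byte[] { 1, 2, 3 });

            // Both buffers are now added (and ownership transferred),
            // whether or not they are readable ...
            composite.addComponent(empty);
            composite.addComponent(data);

            // ... so a single release() frees everything that was handed over.
            composite.release();
        }
    }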
Motivation:
HttpToHttp2ConnectionHandlerTest was accidentally modified with a
debugging value for WAIT_TIME_SECONDS.
Modifications:
Reverted the change.
Result:
Original wait time restored.
While implementing netty-handler-proxy, I realized various issues in our
current socksx package. Here's the list of the modifications and their
background:
- Split message types into interfaces and default implementations
- so that a user can implement alternative message implementations
- Use classes instead of enums when a user might want to define a new
constant
- so that a user can extend SOCKS5 protocol, such as:
- defining a new error code
- defining a new address type
- Rename the message classes
- to avoid abbreviated class names. e.g:
- Cmd -> Command
- Init -> Initial
- so that the class names align better with the protocol
specifications. e.g:
- AuthRequest -> PasswordAuthRequest
- AuthScheme -> AuthMethod
- Rename the property names of the messages
- so that the property names align better with the field names in the
protocol specifications
- Improve the decoder implementations
- Give a user more control over when a decoder has to be removed
- Use DecoderResult and DecoderResultProvider to handle decode failure
gracefully. i.e. no more Unknown* message classes
- Add SocksPortUnificationServerHandler since it's useful to the users
who write a SOCKS server
- Cleaned up and moved from the socksproxy example
Motivation:
Sometimes it's useful to be able to create an Epoll*Channel from an existing file descriptor. This is especially helpful if you integrate some C/JNI code.
Modifications:
- Add an extra constructor to the Epoll*Channel implementations that takes a FileDescriptor as an argument
- Rename EpollFileDescriptor to NativeFileDescriptor and make it public
- Also ensure we obtain the correct remote/local address when creating a Channel from a FileDescriptor
Result:
It's now possible to create a FileDescriptor and instantiate an Epoll*Channel from it.
Motivation:
There are various places in OpenSslEngine where we can do performance optimizations.
Modifications:
- Reduce JNI calls when possible
- Detect finished handshake as soon as possible
- Eliminate double calculations
- Wrap multiple ByteBuffers in a loop if possible
Result:
Better performance
Motivation:
If SO_LINGER is used, the shutdownOutput() and close() syscalls will block until either all data has been sent or the timeout is exceeded. This is a problem when we execute them on the EventLoop, as it means the EventLoop may be blocked and so cannot process any other I/O.
Modifications:
- Add AbstractUnsafe.closeExecutor(), which returns null by default, and use this Executor for close() if it is not null.
- Override closeExecutor() in NioSocketChannel and EpollSocketChannel and return GlobalEventExecutor.INSTANCE if getSoLinger() > 0
- Use closeExecutor() in shutdownInput(...) in NioSocketChannel and EpollSocketChannel
Result:
No more blocking of the EventLoop if SO_LINGER is used and shutdownOutput() or close() is called.
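A condensed sketch of the pattern (not the actual AbstractUnsafe code; the channel methods below are placeholders):

    import java.util.concurrent.Executor;
    import io.netty.util.concurrent.GlobalEventExecutor;

    abstract class LingerAwareCloseSketch {
        abstract int getSoLinger();      // placeholder for the channel's SO_LINGER setting
        abstract void doCloseBlocking(); // placeholder for the syscall that may block

        // SO_LINGER > 0 means close() can block until the data is flushed
        // or the linger timeout expires, so hand it to another executor.
        Executor closeExecutor() {
            return getSoLinger() > 0 ? GlobalEventExecutor.INSTANCE : null;
        }

        void close() {
            Executor executor = closeExecutor();
            if (executor != null) {
                executor.execute(new Runnable() {
                    @Override
                    public void run() {
                        doCloseBlocking(); // potentially blocking close, off the event loop
                    }
                });
            } else {
                doCloseBlocking(); // fast path stays on the event loop
            }
        }
    }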
Motivation:
Currently, using a MessageAggregator in the pipeline always results in the creation of an unpooled heap CompositeByteBuf. By using the ByteBufAllocator the CompositeByteBuf will use the implementation specified by the ByteBufAllocator.
Modifications:
Use the ChannelHandlerContext's ByteBufAllocator to create the CompositeByteBuf for message aggregation
Result:
The CompositeByteBuf is now configured based on the ByteBufAllocator's settings.
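An illustrative contrast of the two allocation paths (not the aggregator's exact code):

    import io.netty.buffer.CompositeByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;

    final class AggregationBufferSketch {
        // Old behaviour: always an unpooled heap composite.
        static CompositeByteBuf before(int maxComponents) {
            return Unpooled.compositeBuffer(maxComponents);
        }

        // New behaviour: whatever the channel's configured allocator provides
        // (pooled or unpooled, heap or direct).
        static CompositeByteBuf after(ChannelHandlerContext ctx, int maxComponents) {
            return ctx.alloc().compositeBuffer(maxComponents);
        }
    }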
Motivation:
Some of the methods are frequently called and so should be inlined if possible.
Modifications:
Give the compiler a hint that we want to inline these methods.
Result:
Better performance if inlined.
Motivation:
Older Linux kernels have problems handling a large timeout value for epoll_wait(...) and so wait forever.
Modifications:
Adjust the timeout on the fly if too big a value is passed in.
Result:
Works correctly on older kernels as well.
Motivation:
The writeSpinCount was ignored in the epoll transport, which just kept on trying to write. This could cause unnecessary CPU spinning if a slow remote peer was reading the data very slowly.
Modification:
- Correctly take writeSpinCount into account when writing.
Result:
Less CPU spinning when writing to a slow remote peer.
Motivation:
Fix a regression introduced by 585ce1593f, which failed to set EPOLLRDHUP for all stream channels.
Modifications:
Correctly set EPOLLRDHUP for all stream channels in the AbstractEpollStreamChannel constructor.
Result:
No more test failures in EpollDomain*Channel tests.
Motivation:
Before, we used a long[] to store the ready events. This had a few problems and limitations:
- An extra loop was needed to translate between epoll_event and our long
- JNI may need to do an extra memory copy if the JVM does not support pinning
- More branches
Modifications:
- Introduce an EpollEventArray which allows writing directly into a struct epoll_event* and passing it to epoll_wait.
Result:
Better speed when using native transport, as shown in the benchmark.
Before:
[xxx@xxx wrk]$ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 14.56ms 8.64ms 117.15ms 80.58%
Req/Sec 286.17k 38.71k 421.48k 68.17%
546324329 requests in 2.00m, 73.78GB read
Requests/sec: 4553438.39
Transfer/sec: 629.66MB
After:
[xxx@xxx wrk]$ ./wrk -H 'Connection: keep-alive' -d 120 -c 256 -t 16 -s scripts/pipeline-many.lua http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 14.12ms 8.69ms 100.40ms 83.08%
Req/Sec 294.79k 40.23k 472.70k 66.75%
555997226 requests in 2.00m, 75.08GB read
Requests/sec: 4634343.40
Transfer/sec: 640.85MB
Motivation:
isRoot() is an expensive operation. We should avoid calling it if
possible.
Modifications:
Move the isRoot() checks to the end of the 'if' block, so that isRoot()
is evaluated only when really necessary.
Result:
isRoot() is evaluated only when SO_BROADCAST is set and the bind address
is the any-local address.
Related:
- 14d64d0966
Motivation:
The commit mentioned above introduced a regression where the
channelReadComplete() event is swallowed by a handler that was added
dynamically.
Modifications:
Do not suppress channelReadComplete() if the current handler's
channelRead() method was not invoked at all, so that a just-added
handler does not suppress channelReadComplete().
Result:
Regression is gone, and channelReadComplete() is invoked when necessary.
Motivation:
Even if a handler called ctx.fireChannelReadComplete(), the next handler
should not get its channelReadComplete() invoked if fireChannelRead()
was not invoked before.
Modifications:
- Ensure channelReadComplete() is invoked only when the handler of the
current context actually produced a message, because otherwise there's
no point in triggering channelReadComplete().
i.e. channelReadComplete() must follow channelRead().
- Fix a bug where ctx.read() was not called if the handler of the
current context did not produce any message, making the connection
stall. Read the new comment for more information.
Result:
- channelReadComplete() is invoked only when it makes sense.
- No stale connection