Motivation:
While porting some changes from 4.0 to the 4.1 and master branches I changed the default allocator from pooled to unpooled by mistake. This should be reverted. The offending commit is 4a3ef90381.
Thanks to @blucas for spotting this.
Modifications:
Revert the changes related to the allocator.
Result:
Use the correct default allocator again.
Motivation:
At the moment we expand the ByteBuffer[] when we have more than 1024 ByteBuffers to write and replace the stored instance in its FastThreadLocal. This is not needed and may even harm performance on Linux, as IOV_MAX is 1024 and so the expansion only forces the JVM to do an array copy.
Modifications:
Just exit the nioBuffers() method if we cannot fit more ByteBuffers into the array. This way we will pick them up on the next call.
Result:
Remove the unnecessary array copy and simplify the code.
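For illustration, a minimal sketch of the idea (the class, method, and field names below are illustrative, not the actual ChannelOutboundBuffer code):

```java
import java.nio.ByteBuffer;
import java.util.List;

final class NioBuffersSketch {
    // Linux writev() accepts at most IOV_MAX (1024) buffers per call.
    private static final int IOV_MAX = 1024;

    // Fill at most IOV_MAX buffers into the reusable array and stop instead of
    // expanding it; buffers that do not fit are simply picked up on the next call.
    static int fill(ByteBuffer[] array, List<ByteBuffer> pending) {
        int count = 0;
        for (ByteBuffer buf : pending) {
            if (count == IOV_MAX || count == array.length) {
                break; // no array expansion, no array copy
            }
            array[count++] = buf;
        }
        return count;
    }
}
```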
Motivation:
We cache the ByteBuffers in ChannelOutboundBuffer.nioBuffers() for the Entries in the ChannelOutboundBuffer to reduce some overhead. The problem is that this can lead to data corruption if an incomplete write happens and the next write is a non-gathering write.
To fix this we should remove the caching, which does not help much anyway and only makes the code error-prone.
Modifications:
Remove the caching of ByteBuffers.
Result:
No more data corruption.
Motivation:
Currently Traffic Shaping uses only one timer, which can lead to
"partially" wrong bandwidth computation when only a "short" time elapses
between adding used bytes and the moment the TrafficCounter updates
itself and the traffic is finally computed.
Indeed, the TrafficCounter is updated every x delay and at that moment
its counters are saved into "lastXxxxBytes" and reset to 0. Therefore,
when one queries the counter, it first updates the TrafficCounter with
the added used bytes. If this happens just before the TrafficCounter is
updated, the bandwidth computation will use a TrafficCounter with a "0"
value (the value being reset once the delay occurs). Therefore, the
traffic shaping computation is wrong in rare cases.
Secondly, traffic shaping should avoid the "timeout" effect where
possible by not stopping reads or writes for more than a maxTime, this
maxTime being less than the timeout limit.
Thirdly, traffic shaping on the read side had an issue: the readOp was
not set although it should have been, so reads were never actually
blocked from the socket's point of view.
Modifications:
The TrafficCounter has 2 new methods that compute the time to wait
(for read or write), using the currentXxxxBytes in priority (as
before) but falling back to the lastXxxxBytes when the current value
is 0, and therefore having a better chance of reflecting the real traffic.
Moreover the handler can change the default "max time to wait", which
defaults to half of the "standard" timeout (30s / 2 = 15s).
Finally we call setAutoRead(boolean) according to the situation,
as proposed in #2696 (that pull request is in an error state for unknown reasons).
Result:
Traffic shaping is taken into account more accurately (no 0 value when
there shouldn't be one) and it tries not to block traffic longer than the timeout.
Moreover the read is really stopped from the socket's point of view.
This version is similar to #2388 and #2450.
This version is for master and includes the #2696 pull request
to ease the merge process.
It also includes #2748.
The test minimizes the time checks by reducing them to 66ms steps (55s).
Conflicts:
handler/src/main/java/io/netty/handler/traffic/AbstractTrafficShapingHandler.java
Motivation:
The FastLZ compression codec provides sending and receiving data encoded with the fast FastLZ algorithm using block mode.
Modifications:
- Added part of the `jfastlz` library which implements the FastLZ algorithm. See the FastLz class.
- Implemented FastLzFramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Implemented FastLzFramedDecoder which extends ByteToMessageDecoder and provides decompression of incoming messages.
- Added integration tests for `FastLzFramedEncoder/Decoder`.
Result:
A full FastLZ compression codec which can compress/decompress data using the FastLZ algorithm.
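As a usage sketch, the new handlers can be added to a pipeline like any other codec (the package and no-arg constructors assumed below are illustrative):

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
// FastLzFramedEncoder / FastLzFramedDecoder come from this codec; their package is assumed here.
import io.netty.handler.codec.compression.FastLzFramedDecoder;
import io.netty.handler.codec.compression.FastLzFramedEncoder;

public class FastLzPipelineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(
                new FastLzFramedEncoder(),   // compresses outgoing messages
                new FastLzFramedDecoder());  // decompresses incoming messages
        // application handlers follow
    }
}
```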
Motivation:
The HTTP/2 codec currently provides direct callbacks to access stream events/data. The HTTP/2 codec provides the protocol support for HTTP/2 but it does not pass messages up the context pipeline. It would be nice to have a decoder which could collect the data framed by HTTP/2 and translate this into traditional HTTP type objects. This would allow the traditional Netty context pipeline to be used to separate processing concerns (i.e. HttpContentDecompressor). It would also be good to have a layer which can translate FullHttp[Request|Response] objects into HTTP/2 frame outbound events.
Modifications:
Introduce a new InboundHttp2ToHttpAdapter and supporting classes which will translate HTTP/2 stream events/data into HttpObject objects. Introduce a new DelegatingHttp2HttpConnectionHandler which will translate FullHttp[Request|Response] objects to HTTP/2 frame events.
Result:
Introduced translation of HTTP/2 frame events to the HttpObject layer.
Introduced translation of FullHttp[Request|Response] objects to HTTP/2 frame events.
Introduced new unit tests to support new code.
Updated HTTP/2 client example to use new code.
Miscellaneous updates and bug fixes made to support the new code.
Motivation:
It is often very expensive to instantiate an exception. TextHeader
should not raise an exception when it fails to find a header or when
a header value is not valid.
Modification:
- Change the return type of the getter methods to Integer and Long so
that null is returned when no header is found or its value is invalid
- Update Javadoc
Result:
- Fixes #2758
- No unnecessary instantiation of exceptions
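A hedged sketch of the intended call pattern; the stand-in interface below only mirrors the getters described above (names and exact signatures are assumptions):

```java
// Stand-in for the header getters described above; null signals "missing or invalid"
// instead of a thrown exception.
interface HeadersSketch {
    Integer getInt(CharSequence name);   // null if absent or not a valid int
    Long getLong(CharSequence name);     // null if absent or not a valid long
}

final class HeaderLookup {
    static long contentLengthOrDefault(HeadersSketch headers, long def) {
        Long length = headers.getLong("content-length");
        return length != null ? length : def;   // no exception instantiated on the miss path
    }
}
```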
Related issue: #2649 and #2745
Motivation:
At the moment there is no way to get and remove a header with one call.
This means you need to search the headers twice. We should add
getAndRemove(...) to allow doing so with one call.
Modifications:
Add getAndRemove(...) and getUnconvertedAndRemove(...) and their
variants
Result:
More efficient API
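A hedged sketch of how the combined call avoids the double lookup; the stand-in interface mirrors the description above and its signatures are assumptions:

```java
// Stand-in for the headers API described above (names/signatures are assumptions).
interface RemovableHeadersSketch {
    CharSequence get(CharSequence name);
    boolean remove(CharSequence name);
    CharSequence getAndRemove(CharSequence name);   // one lookup instead of two
}

final class TakeHeader {
    static CharSequence take(RemovableHeadersSketch headers, CharSequence name) {
        // Before this change: headers.get(name) followed by headers.remove(name),
        // i.e. the header storage is searched twice.
        return headers.getAndRemove(name);
    }
}
```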
Motivation:
At the moment it's only possible for a user to set the RecvByteBufAllocator for a Channel but not access the Handle once it is assigned. This makes it hard to write more flexible implementations.
Modifications:
Add a new method to Channel.Unsafe to allow access to the Handle used by the Channel. The RecvByteBufAllocator.Handle is created lazily.
Result:
It's possible to write more flexible implementations that can adjust the Handle used by a Channel on the fly.
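For illustration, a minimal sketch of reading the Handle through the Unsafe (the accessor name on Channel.Unsafe is an assumption):

```java
import io.netty.channel.Channel;
import io.netty.channel.RecvByteBufAllocator;

final class RecvHandleAccess {
    // Hedged sketch: the lazily created Handle can now be inspected or tuned at runtime.
    static int nextReceiveBufferGuess(Channel channel) {
        RecvByteBufAllocator.Handle handle = channel.unsafe().recvBufAllocHandle(); // accessor name assumed
        return handle.guess();   // size the allocator would use for the next read
    }
}
```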
Motivation:
Sometimes a ChannelHandler needs to queue writes up to some point and then process them. We currently have no data structure for this, so the user ends up using a Queue or something similar. The problem is that with such an approach Channel.isWritable() will not work as expected, so the user risks writing too fast. That's exactly what happened in our SslHandler. For this purpose we need to add a special data structure which also takes care of updating the Channel, so that Channel.isWritable() works as expected.
Modifications:
- Add PendingWriteQueue which can be used for this purpose
- Make use of PendingWriteQueue in SslHandler
Result:
It is now possible to queue writes in a ChannelHandler and still have Channel.isWritable() work as expected. This also fixes #2752.
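A hedged usage sketch of queueing writes in a handler while keeping Channel.isWritable() accurate (draining on flush() is just an example trigger):

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPromise;
import io.netty.channel.PendingWriteQueue;

public class QueueingHandler extends ChannelDuplexHandler {
    private PendingWriteQueue queue;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        queue = new PendingWriteQueue(ctx);
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        queue.add(msg, promise);      // also updates the channel's pending bytes
    }

    @Override
    public void flush(ChannelHandlerContext ctx) {
        queue.removeAndWriteAll();    // drain the queued writes
        ctx.flush();
    }
}
```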
Motivation:
After a channel is created it's usually assigned to an
EventLoop. During the lifetime of a Channel the
EventLoop is then responsible for processing all I/O
and compute tasks of the Channel.
For various reasons (e.g. load balancing) a user might
require the ability for a Channel to be assigned to
another EventLoop during its lifetime.
Modifications:
Introduce under the hood changes that ensure that Netty's
thread model is obeyed during and after the deregistration
of a channel.
Ensure that tasks (one time and periodic) are executed by
the right EventLoop at all times.
Result:
A Channel can be deregistered from one and re-registered with
another EventLoop.
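A hedged sketch of what such a migration could look like from user code (the exact registration semantics follow the description above):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.EventLoopGroup;

final class ChannelMigration {
    // Deregister from the current EventLoop, then register with the target group
    // once the deregistration has completed.
    static void migrate(final Channel channel, final EventLoopGroup target) {
        channel.deregister().addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) {
                if (future.isSuccess()) {
                    target.register(channel);   // I/O continues on the new EventLoop
                }
            }
        });
    }
}
```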
Related issue: #2250
Motivation:
Prior to this commit, Netty's non-blocking EventLoops
were each assigned a fixed thread by which all of the
EventLoop's I/O and handler logic would be performed.
While this is a fine approach for most users of Netty,
some advanced users require more flexibility in
scheduling the EventLoops.
Modifications:
Remove all direct usages of threads in MultithreadEventExecutorGroup,
SingleThreadEventExecutor et al., and introduce an Executor
abstraction instead.
The way to think about this change is that each
iteration of an EventLoop is now a task that gets scheduled
on a ForkJoinPool.
While the ForkJoinPool is the default, one also has the
ability to plug his/her own Executor (i.e. thread pool)
into an EventLoop(Group).
Result:
Netty hands off thread management to a ForkJoinPool by default.
Users can also provide their own thread pool implementation and
get some control over scheduling Netty's EventLoops.
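A hedged sketch of plugging in a custom Executor (the (nThreads, Executor) constructor is assumed from the description above):

```java
import java.util.concurrent.Executor;
import java.util.concurrent.ForkJoinPool;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

final class CustomExecutorBootstrap {
    // A ForkJoinPool remains the default when no Executor is supplied.
    static EventLoopGroup newGroup() {
        Executor executor = new ForkJoinPool();
        return new NioEventLoopGroup(4, executor);
    }
}
```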
In Netty 3, downstream writes of SPDY data frames and upstream reads of
SPDY window update frames occur on different threads.
When receiving a window update frame, we synchronize on a java object
(SpdySessionHandler::flowControlLock) while sending any pending writes
that are now able to complete.
When writing a data frame, we check the send window size to see if we
are allowed to write it to the socket, or if we have to enqueue it as a
pending write. To prevent races with the window update frame, this is
also synchronized on the same SpdySessionHandler::flowControlLock.
In Netty 4, upstream and downstream operations on any given channel now
occur on the same thread. Since java locks are re-entrant, this now
allows downstream writes to occur while processing window update frames.
In particular, when we receive a window update frame that unblocks a
pending write, this write completes which triggers an event notification
on the response, which in turn triggers a write of a data frame. Since
this is on the same thread it re-enters the lock and modifies the send
window. When the write completes, we continue processing pending writes
without knowledge that the window size has been decremented.
Motivation:
The calculation of the max wait time for HashedWheelTimerTest.testExecutionOnTime() was wrong and so the test sometimes failed.
Modifications:
Fix the max wait time.
Result:
No more test failures.
Related issue: #2743
Motivation:
When there is more than one stream with the same priority, the set
returned by SpdySession.getActiveStreams() will not include all of them,
because it uses a TreeSet and only compares the priority of streams. If
two different streams have the same priority, one of them will be
discarded by the TreeSet.
Modification:
- Rename getActiveStreams() to activeStreams()
- Replace PriorityComparator with StreamComparator
Result:
Two different streams with the same priority are compared correctly.
Motivation:
We forgot to do a null check on the cause parameter of
ChannelFuture.setFailure(cause)
Modifications:
Add a null check
Result:
Fixed issue: #2728
Motivation:
We made various changes related to the ChannelOutboundBuffer in the 4.0 branch. This commit ports all of them over and makes sure our branches are in sync with regard to these changes.
Related to [#2734], [#2709], [#2729], [#2710] and [#2693].
Modification:
Port all changes that were made to the ChannelOutboundBuffer.
This includes the port of the following commits:
- 73dfd7c01b
- 997d8c32d2
- e282e504f1
- 5e5d1a58fd
- 8ee3575e72
- d6f0d12a86
- 16e50765d1
- 3f3e66c31a
Result:
- Less memory usage by ChannelOutboundBuffer
- Same code as in 4.0 branch
- Make it possible to use ChannelOutboundBuffer with Channel implementations that do not extend AbstractChannel
Motivation:
If the request contains URI parameters but no path, the HttpRequestEncoder produces an invalid URI while trying to add the missing path.
Modifications:
Correctly handle the case of a URI with parameters but no path.
Result:
HttpRequestEncoder produces a correct URI in all cases.
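A hedged sketch of the fix's intent for the simple case: when the URI has a query string but no path, a '/' is inserted before the '?'. The helper below is illustrative and not the encoder's actual code:

```java
final class UriPathFix {
    // e.g. "localhost:8080?foo=bar" lacks a path, so the encoder must emit
    // "localhost:8080/?foo=bar" instead of an invalid request line.
    static String ensurePath(String uri) {
        int query = uri.indexOf('?');
        int slash = uri.indexOf('/');
        if (query >= 0 && (slash < 0 || slash > query)) {
            return uri.substring(0, query) + '/' + uri.substring(query);
        }
        return uri;
    }
}
```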
Related issue: #2733
Motivation:
Unlike OpenSsl, Epoll lacks a couple of useful availability-checker
methods:
- ensureAvailability()
- unavailabilityCause()
Modifications:
Add missing methods
Result:
More ways to check availability and to get the cause of
unavailability programmatically.
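For illustration, the checks can be used like this (mirroring what OpenSsl already offers):

```java
import io.netty.channel.epoll.Epoll;

final class EpollCheck {
    static boolean useNativeTransport() {
        if (Epoll.isAvailable()) {
            return true;                                   // use the native epoll transport
        }
        Throwable cause = Epoll.unavailabilityCause();     // why epoll cannot be used
        System.err.println("epoll unavailable: " + cause);
        return false;                                      // fall back to NIO
    }

    static void requireNativeTransport() {
        Epoll.ensureAvailability();                        // throws if epoll is not usable
    }
}
```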
Motivation:
Currently it is not possible to load an encrypted private key when
creating a JDK based SSL server context.
Modifications:
- Added a static method to JdkSslServerContext which handles key spec generation for (encrypted) private keys and made use of it.
- Added tests for creating an SSL server context based on an (encrypted)
private key.
Result:
It is now possible to create a JDK based SSL server context with an
encrypted (password protected) private key.
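A hedged sketch of creating such a context (the file paths are hypothetical; the newServerContext(certChain, key, keyPassword) factory is the one assumed here):

```java
import java.io.File;
import io.netty.handler.ssl.SslContext;

final class EncryptedKeyServerContext {
    static SslContext create() throws Exception {
        File certChain = new File("server.crt");    // hypothetical certificate chain
        File privateKey = new File("server.key");   // encrypted (password protected) private key
        return SslContext.newServerContext(certChain, privateKey, "changeit");
    }
}
```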
Motivation:
There were two buffer leaks in the codec-dns module.
Modifications:
- Fix buffer leak in DnsResponseTest.readResponseTest()
- Correctly release DnsResources on Exception
Result:
No more buffer leaks in the codec-dns module.
Motivation:
Before this change Bzip2BitReader and Bzip2BitWriter accessed the ByteBuf byte by byte, so the tests for the Bzip2 compression codec took a lot of time when run with the paranoid level of resource leak detection. For more information see the comments in #2681 and #2689.
Modifications:
- Increased size of bit buffers from 8 to 64 bits.
- Improved reading and writing operations.
- Kept a reference to the incoming ByteBuf inside Bzip2BitReader.
- Added methods to check possible readable bits and bytes in Bzip2BitReader.
- Updated Bzip2 classes to use new API of Bzip2BitReader.
- Added new constants to Bzip2Constants.
Result:
Increased the size of the bit buffers and improved the performance of the Bzip2 compression codec (by 13% for general work and by 55% for tests with the paranoid level of resource leak detection).
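For illustration, a minimal sketch of the widened bit buffer: bits are refilled a whole byte at a time into a 64-bit long instead of going back to the ByteBuf for every single bit. Field and method names are illustrative, not the actual reader's:

```java
import io.netty.buffer.ByteBuf;

final class BitReaderSketch {
    private long bitBuffer;   // up to 64 buffered bits
    private int bitCount;     // number of valid bits in bitBuffer

    // Reads n bits (n up to 24 in this sketch), refilling byte-wise instead of bit-wise.
    int readBits(ByteBuf in, int n) {
        while (bitCount < n) {
            bitBuffer = bitBuffer << 8 | in.readUnsignedByte();
            bitCount += 8;
        }
        bitCount -= n;
        return (int) (bitBuffer >>> bitCount) & ((1 << n) - 1);
    }
}
```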
Motivation:
As /proc/sys/net/core/somaxconn does not exist on non-Linux platforms, you see a noisy stack trace when the debug level is enabled while the static method of NetUtil is executed.
Modifications:
Check if the file exists before trying to parse it.
Result:
Less noisy logging on non-Linux platforms.
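For illustration, a minimal sketch of the check (class and method names, and the fallback handling, are illustrative):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;

final class SomaxconnSketch {
    // Only attempt to parse the file when it actually exists, so non-Linux
    // platforms no longer log a stack trace at debug level.
    static int somaxconn(int defaultValue) {
        File file = new File("/proc/sys/net/core/somaxconn");
        if (!file.exists()) {
            return defaultValue;
        }
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            return Integer.parseInt(in.readLine().trim());
        } catch (Exception e) {
            return defaultValue;
        }
    }
}
```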
Related issue: #2354
Motivation:
AbstractConstant.compareTo() can return 0 even if the specified constant
object is not the same instance as 'this'.
Modifications:
- Compare the identityHashCode of the constants first. If that fails,
allocate a small direct buffer and use its memory address as a unique
value. If the platform does not provide a way to get the memory
address of a direct buffer, use a thread-local random value.
- Signal cannot extend AbstractConstant. Use delegation.
Result:
It is practically impossible for AbstractConstant.compareTo() to return
0 for different constant objects.
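An illustrative sketch of the tie-breaking strategy described above; the class and field names are illustrative, not the actual AbstractConstant implementation:

```java
final class ConstantCompareSketch implements Comparable<ConstantCompareSketch> {
    // Per-instance value: a direct buffer's memory address, or a thread-local random
    // value when the address cannot be obtained.
    private final long uniquifier;

    ConstantCompareSketch(long uniquifier) {
        this.uniquifier = uniquifier;
    }

    @Override
    public int compareTo(ConstantCompareSketch other) {
        if (this == other) {
            return 0;
        }
        int result = Integer.compare(System.identityHashCode(this), System.identityHashCode(other));
        if (result != 0) {
            return result;          // identity hash codes already differ
        }
        result = Long.compare(uniquifier, other.uniquifier);
        if (result != 0) {
            return result;          // fall back to the per-instance unique value
        }
        throw new Error("failed to compare two different constants");
    }
}
```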
Motivation:
We sometimes do not use the correct exception message when throwing exceptions from the native code.
Modifications:
Fixed the message.
Result:
Correct message in exception
Motivation:
The PID_MAX_LIMIT on 64-bit Linux systems is 4194304 and on OS X it is 99998. At the moment we use 65535 as an upper limit, which is too small.
Modifications:
Use 4194304 as the maximum possible value.
Result:
No more false positives when trying to detect the current PID.
Motivation:
DnsResource.duplicate() should return DnsResource and not ByteBufHolder
Modifications:
Change return type from ByteBufHolder to DnsResource
Result:
No need to cast to the correct type when using duplicate()
Motivation:
We have some inconsistency when handling writes: sometimes we call ChannelOutboundBuffer.progress(...) for complete writes and sometimes not. We should always call it.
Modifications:
Correctly call ChannelOutboundBuffer.progress(...) for complete and incomplete writes.
Result:
Consistent behavior
Motivation:
While optimizing gathering writes I introduced a bug when writing a single ByteBuf that has a memoryAddress. This regression was introduced by 88bd6e7a93.
Modifications:
Correctly use the writerIndex as an argument when calling Native.writeAddress(...).
Result:
No more corruption when writing single buffers.
Motivation:
While benchmarking the native transport with gathering writes I noticed that it is quite slow. This is due to the fact that we need to do a lot of array copies to get the buffers into the iov array.
Modification:
Introduce a new class called IovArray which allows filling buffers directly into an iov array that can be passed over to JNI without any array copies. This gives a nice optimization in terms of speed when doing gathering writes.
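A rough illustration of the layout idea, assuming PlatformDependent's unsafe memory helpers; the class name reuse, the 16-byte stride (8-byte pointer + 8-byte size_t), and the method shapes are illustrative, not the actual IovArray code:

```java
import io.netty.buffer.ByteBuf;
import io.netty.util.internal.PlatformDependent;

final class IovArraySketch {
    // Off-heap block laid out like a struct iovec[IOV_MAX] that the JNI side can
    // hand to writev() without any intermediate array copies.
    private final long memoryAddress = PlatformDependent.allocateMemory(1024 * 16);
    private int count;

    boolean add(ByteBuf buf) {
        if (count == 1024) {          // IOV_MAX reached
            return false;
        }
        long base = memoryAddress + count * 16L;
        PlatformDependent.putLong(base, buf.memoryAddress() + buf.readerIndex()); // iov_base
        PlatformDependent.putLong(base + 8, buf.readableBytes());                 // iov_len
        count++;
        return true;
    }

    void release() {
        PlatformDependent.freeMemory(memoryAddress);
    }
}
```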
Result:
Big performance improvement when doing gathering writes. See the included benchmark...
Before:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 23.44ms 16.37ms 259.57ms 91.77%
Req/Sec 181.99k 31.69k 304.60k 78.12%
346544071 requests in 2.00m, 46.48GB read
Requests/sec: 2887885.09
Transfer/sec: 396.59MB
With this change:
[nmaurer@xxx]~% wrk/wrk -H 'Host: localhost' -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' -H 'Connection: keep-alive' -d 120 -c 256 -t 16 --pipeline 256 http://xxx:8080/plaintext
Running 2m test @ http://xxx:8080/plaintext
16 threads and 256 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 21.93ms 16.33ms 305.73ms 92.34%
Req/Sec 194.56k 33.75k 309.33k 77.04%
369617503 requests in 2.00m, 49.57GB read
Requests/sec: 3080169.65
Transfer/sec: 423.00MB
Motivation:
Due to a race condition while handling cancellation of TimerTasks, it was possible to corrupt the linked-list structure represented by HashedWheelBucket and so produce an NPE.
Modification:
Fix the problem by adding another MpscLinkedQueue which holds the cancellation tasks and processing them on each tick. This requires no synchronization / locking at all, at the cost of a latency of at most 1 tick before the TimerTask can be GC'ed.
Result:
No more NPE
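An illustrative sketch of the approach: cancellations are only enqueued, and the worker thread unlinks them from the buckets once per tick, so the list is never touched concurrently. The queue type and names below are illustrative (the actual change uses an MpscLinkedQueue internally):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

final class CancellationSketch {
    private final Queue<Runnable> cancelledTimeouts = new ConcurrentLinkedQueue<Runnable>();

    void cancel(Runnable removeFromBucketTask) {
        cancelledTimeouts.add(removeFromBucketTask);   // may be called from any thread
    }

    void processCancelledTasks() {                     // called by the worker on every tick
        for (;;) {
            Runnable task = cancelledTimeouts.poll();
            if (task == null) {
                break;
            }
            task.run();                                // unlink the timeout from its bucket
        }
    }
}
```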
Motivation:
In the ReplayingDecoder / ByteToMessageDecoder channelInactive(...) method we try to decode one last time and fire all decoded messages through the pipeline before calling ctx.fireChannelInactive(...). To keep the correct order of events we also need to call ctx.fireChannelReadComplete() if anything was read.
Modifications:
- Change channelInactive(...) to call ctx.fireChannelReadComplete() if something was decoded
- Move out.recycle() to finally block
Result:
Correct order of events.
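An illustrative sketch of the event ordering on channelInactive: fire what the last decode pass produced, then channelReadComplete (only if something was decoded), then channelInactive, with the output list recycled in a finally block. decodeLast(...) stands in for the decoder's real last-decode step:

```java
import java.util.ArrayList;
import java.util.List;
import io.netty.channel.ChannelHandlerContext;

abstract class InactiveOrderingSketch {
    abstract void decodeLast(ChannelHandlerContext ctx, List<Object> out);

    public void channelInactive(ChannelHandlerContext ctx) {
        List<Object> out = new ArrayList<Object>();
        try {
            decodeLast(ctx, out);
            for (Object msg : out) {
                ctx.fireChannelRead(msg);
            }
            if (!out.isEmpty()) {
                ctx.fireChannelReadComplete();   // keep the read/readComplete pairing
            }
            ctx.fireChannelInactive();
        } finally {
            out.clear();                         // always recycle, even if decoding threw
        }
    }
}
```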
Related issue: #2688
- DnsClass and DnsType
- Make DnsClass and DnsType implement Comparable
- Optimize the message generation of IllegalArgumentException by
pre-populating the list of the expected parameters
- Move the static methods up
- Relax the validation rule of DnsClass so that a user can define a
CLASS which is not listed in RFC 1035
- valueOf(int) does not throw IllegalArgumentException anymore as long
as the specified value is an unsigned short.
- Rename create() and forName() to valueOf() so that they look like a
real enum
- Rename type() and clazz() to intValue() so that they conform to our
naming convention
- Add missing null checks in DnsEntry
Motivation:
DNS class and type were represented as integers rather than an enum or a
similar dedicated value type. This can be a potential source of a
parameter order bug which might be difficult to track down.
Modifications:
Add DnsClass and DnsType to replace integer parameters
Result:
Type safety and less error-proneness