When a ChannelOutboundBuffer contains ByteBufs followed by a FileRegion,
removeBytes() will fail with a ClassCastException. It should break the
loop instead.
f31c630c8c was causing
SocketGatheringWriteTest to fail because it did not take into account
the case where an empty buffer exists in a gathering write.
When there is an empty buffer in a gathering write, the number of
buffers returned by ChannelOutboundBuffer.nioBuffers() and the actual
number of write attempts can differ.
To remove the write requests correctly, a byte transport must use
ChannelOutboundBuffer.removeBytes().
Motivation:
Because of incorrect logic in the EmbeddedChannel constructor, it is not possible to use EmbeddedChannel with a ChannelInitializer as a constructor argument. This is because it adds the internal LastInboundHandler to its ChannelPipeline before it registers itself with the EventLoop.
Modifications:
Register the channel with the EventLoop before adding the LastInboundHandler to the ChannelPipeline.
Result:
It's now possible to use EmbeddedChannel with ChannelInitializer.
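A minimal usage sketch of what this enables; the handler added inside the initializer is just illustrative:

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.logging.LoggingHandler;

// The initializer now runs, because the channel registers itself with its
// EventLoop before the internal LastInboundHandler is added.
static EmbeddedChannel newInitializedChannel() {
    return new EmbeddedChannel(new ChannelInitializer<EmbeddedChannel>() {
        @Override
        protected void initChannel(EmbeddedChannel ch) {
            ch.pipeline().addLast(new LoggingHandler());
        }
    });
}
```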
Motivation:
Due to a regression, NioSocketChannel.doWrite(...) will throw a ClassCastException if you do something like:
channel.write(bytebuf);
channel.write(fileregion);
channel.flush();
Modifications:
Correctly handle writing of different message types by using the correct message count while looping over them.
Result:
No more ClassCastException
Related issues: #2741 and #2151
Motivation:
There is no way for ChunkedWriteHandler to know the progress of the
transfer of a ChunkedInput. Therefore, ChannelProgressiveFutureListener
cannot get exact information about the progress of the transfer.
If we add a few methods that optionally provide the transfer progress
to ChunkedInput, it becomes possible for ChunkedWriteHandler to notify
ChannelProgressiveFutureListeners.
If the input has no definite length, we can still report the progress so
far, and consider the length of the input as 'undefined'.
Modifications:
- Add ChunkedInput.progress() and ChunkedInput.length()
- Modify ChunkedWriteHandler to use progress() and length() to notify
the transfer progress
Result:
ChunkedWriteHandler now notifies ChannelProgressiveFutureListener.
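As a rough sketch of the contract, here is a hypothetical chunked source over a byte array; the method names mirror the ones added above, but the exact interface signature may differ between versions:

```java
// Hypothetical chunked source illustrating the new progress()/length() contract.
public final class ChunkedByteArray {
    private final byte[] data;
    private int offset; // bytes emitted so far

    public ChunkedByteArray(byte[] data) {
        this.data = data.clone();
    }

    public long progress() {
        return offset;          // bytes transferred so far
    }

    public long length() {
        return data.length;     // or -1 if the total length is undefined
    }

    public boolean isEndOfInput() {
        return offset >= data.length;
    }

    public byte[] readChunk(int chunkSize) {
        int n = Math.min(chunkSize, data.length - offset);
        byte[] chunk = new byte[n];
        System.arraycopy(data, offset, chunk, 0, n);
        offset += n;
        return chunk;
    }
}
```

After each chunk, the handler can report progress() out of length() to each ChannelProgressiveFutureListener.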
- SocksV[45] -> Socks[45]
- Make encodeAsByteBuf package private with some hassle
- Split SocksMessageEncoder into Socks4MessageEncoder and
Socks5MessageEncoder, and remove the original
- Remove lazy singleton instantiation; we don't need it.
- Remove the deprecated methods
- Fix Javadoc errors
Motivation:
SOCKS 4 and 5 are very different protocols although they share the same
name. It is not possible to incorporate the two protocol versions into
a single package.
Modifications:
- Add a new package called 'socksx' to supersede the 'socks' package.
- Add SOCKS 4/4a support to the 'socksx' package
Result:
codec-socks now supports all SOCKS versions
Related issue: #2407
Motivation:
The current fallback SOMAXCONN value is 3072. It is way too large
compared to the default SOMAXCONN value of popular OSes.
Modifications:
Decrease the fallback SOMAXCONN value to 128 or 200 depending on the
current OS
Result:
Saner fallback value
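A sketch of the fallback idea with illustrative constants; the real lookup of the configured value (e.g. /proc/sys/net/core/somaxconn on Linux) is omitted here:

```java
// Illustrative only: pick an OS-typical backlog default when the
// configured value cannot be read.
static int somaxconnFallback() {
    String os = System.getProperty("os.name", "").toLowerCase();
    // Windows defaults to roughly 200; most Unix-like systems use 128.
    return os.startsWith("windows") ? 200 : 128;
}
```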
Related issue: #2766
Motivation:
Forgot to rename them before the final release.
Modifications:
Rename and then re-introduce the deprecated version that extends the
renamed class.
Result:
Better naming
Motivation:
The LZ4 compression codec allows sending and receiving data encoded with the very fast LZ4 algorithm.
Modifications:
- Added the `lz4` library, which implements the LZ4 algorithm.
- Implemented Lz4FramedEncoder, which extends MessageToByteEncoder and compresses outgoing messages.
- Added tests to verify Lz4FramedEncoder and that the data it compresses can be decompressed by the original library.
- Implemented Lz4FramedDecoder, which extends ByteToMessageDecoder and decompresses incoming messages.
- Added tests to verify Lz4FramedDecoder and that it can decompress data compressed by the original library.
- Added integration tests for Lz4FramedEncoder/Decoder.
Result:
A full LZ4 compression codec which can compress/decompress data using the LZ4 algorithm.
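A hedged usage sketch of wiring the codec into a pipeline (class names as introduced by this change; later releases may rename them):

```java
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

// Illustrative initializer: compress outbound and decompress inbound data.
public class Lz4Initializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline().addLast(new Lz4FramedEncoder()); // outgoing messages
        ch.pipeline().addLast(new Lz4FramedDecoder()); // incoming messages
        // application handlers follow
    }
}
```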
Motivation:
The _0XFF_0X00 buffer is not duplicated, and so is empty after its first use, preventing the connection close from happening on subsequent close frames.
Modifications:
Correctly duplicate the buffer.
Result:
Multiple CloseWebSocketFrames are handled correctly.
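For context, writing a shared ByteBuf consumes its reader index and releases it, so each close frame needs its own retained view; a minimal sketch:

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;

// Writing 'shared' directly would leave it empty (and released) after the
// first close frame; a retained duplicate gives every write its own indexes.
static void writeCloseBytes(ChannelHandlerContext ctx, ByteBuf shared) {
    ctx.writeAndFlush(shared.duplicate().retain());
}
```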
Motivation:
ByteToMessageDecoder and ReplayingDecoder have incorrect javadocs in some places.
Modifications:
Fix incorrect javadocs for both classes.
Result:
Correct javadocs for both classes
Related issue: #2764
Motivation:
EpollSocketChannel.writeFileRegion() does not properly handle the case
where the position of a FileRegion is non-zero.
Modifications:
- Improve SocketFileRegionTest so that it tests the cases where the file
transfer begins from the middle of the file
- Add another jlong parameter named 'base_off' so that we can take the
position of a FileRegion into account
Result:
Improved test passes. Corruption is gone.
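A usage sketch of the case that used to corrupt data: a FileRegion whose transfer starts in the middle of the file (the file passed in is illustrative):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelFuture;
import io.netty.channel.DefaultFileRegion;

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Transfers the second half of the file; before the fix the epoll transport
// ignored 'position' and sent bytes starting at offset 0. The FileRegion
// takes ownership of the FileChannel and closes it when released.
static ChannelFuture sendSecondHalf(Channel ch, File file) throws IOException {
    RandomAccessFile raf = new RandomAccessFile(file, "r");
    long position = raf.length() / 2;
    long count = raf.length() - position;
    return ch.writeAndFlush(new DefaultFileRegion(raf.getChannel(), position, count));
}
```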
Related issue: #2508
Motivation:
The '<exec/>' task takes an unnecessarily long time due to a known issue:
- https://issues.apache.org/bugzilla/show_bug.cgi?id=54128
Modifications:
- Reduce the number of '<exec/>' tasks for faster build
- Use '<propertyregex/>' to extract the output
Result:
Slightly faster build
Related issue: #2028
Motivation:
Some copiedBuffer() methods in Unpooled allocated a direct buffer. An
allocation of a direct buffer is an expensive operation, and thus should
be avoided for unpooled buffers.
Modifications:
- Use heap buffers in all copiedBuffer() methods
Result:
Unpooled.copiedBuffer() is less expensive now.
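A quick illustration of the expected behaviour after this change:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

// copiedBuffer() now allocates on the heap, which is much cheaper than a
// direct-memory allocation for a throwaway unpooled copy.
static ByteBuf copyOnHeap() {
    ByteBuf copy = Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8);
    assert copy.hasArray(); // heap buffers expose a backing array
    return copy;
}
```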
Motivation:
The previous fix disabled the caching of ByteBuffers completely, which can cause performance regressions. This fix makes sure we use nioBuffers() for all writes in NioSocketChannel and so prevents data corruption. This is still kind of a workaround which will be replaced by a more fundamental fix later.
Modifications:
- Revert 4059c9f354
- Use nioBuffers() for all writes to prevent data corruption
Result:
No more data corruption, while still retaining the original speed.
Motivation:
While porting some changes from 4.0 to the 4.1 and master branches, I changed the default allocator from pooled to unpooled by mistake. This should be reverted. The guilty commit is 4a3ef90381.
Thanks to @blucas for spotting this.
Modifications:
Revert changes related to allocator.
Result:
Use the correct default allocator again.
Motivation:
At the moment we expand the ByteBuffer[] when we have more than 1024 ByteBuffers to write and replace the stored instance in its FastThreadLocal. This is not needed and may even harm performance on Linux, as IOV_MAX is 1024 and so this may cause the JVM to do an array copy.
Modifications:
Just exit the nioBuffers() method if we cannot fit more ByteBuffers in the array. This way we will pick them up on the next call.
Result:
Remove unnecessary array copy and simplify the code.
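A simplified sketch of the new behaviour (not the actual ChannelOutboundBuffer code):

```java
import java.nio.ByteBuffer;
import java.util.Queue;

// Fill at most array.length buffers; whatever stays queued is picked up by
// the next call instead of growing the array past IOV_MAX (1024 on Linux).
static int fillNioBuffers(ByteBuffer[] array, Queue<ByteBuffer> pending) {
    int count = 0;
    while (count < array.length && !pending.isEmpty()) {
        array[count++] = pending.poll();
    }
    return count;
}
```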
Motivation:
We cache the ByteBuffers in ChannelOutboundBuffer.nioBuffers() for the Entries in the ChannelOutboundBuffer to reduce some overhead. The problem is that this can lead to data corruption if an incomplete write happens and we then try to do a non-gathering write.
To fix this we should remove the caching, which does not help much anyway and only makes the code buggy.
Modifications:
Remove the caching of ByteBuffers.
Result:
No more data corruption.
Motivation:
Currently traffic shaping uses only one timer, which can lead to a
partially wrong bandwidth computation when only a short time passes
between adding used bytes, the TrafficCounter updating itself, and the
traffic being computed.
Indeed, the TrafficCounter is updated after every check interval, at
which point its value is saved into "lastXxxxBytes" and reset to 0.
Therefore, when one requests the counter, it first updates the
TrafficCounter with the added used bytes. If this value is set just
before the TrafficCounter is updated, the bandwidth computation will
see a TrafficCounter value of "0" (the value having just been reset
when the delay elapsed). Therefore, the traffic shaping computation is
wrong in rare cases.
Secondly, traffic shaping should avoid the "timeout" effect where
possible by never suspending reads or writes for longer than a maxTime,
this maxTime being less than the timeout limit.
Thirdly, traffic shaping on the read side had an issue: the readOp was
not set when it should have been, so reads were never actually blocked
at the socket level.
Modifications:
The TrafficCounter has two new methods that compute the time to wait
(for read or write respectively), preferring the currentXxxxBytes (as
before) but falling back to the lastXxxxBytes when the current value is
0, and therefore having a better chance of reflecting the real traffic.
Moreover, the handler can now change the default "max time to wait",
which defaults to half of the "standard" timeout (30s / 2 = 15s).
Finally, we call setAutoRead(boolean) according to the situation, as
proposed in #2696 (that pull request fails for unknown reasons).
Result:
Traffic shaping now computes bandwidth more accurately (no "0" value
when there shouldn't be one) and tries not to block traffic for longer
than the timeout.
Moreover, reads are now really suspended at the socket level.
This version is similar to #2388 and #2450.
This version is for 4.1 and includes the #2696 pull request to ease the
merge process.
It is compatible with master too.
It also includes #2748.
The test minimizes time checks by reducing them to 66ms steps (55s).
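A rough sketch of the wait-time idea described above; names and arithmetic are illustrative, not the actual TrafficCounter code:

```java
// Prefer the bytes of the current interval, but fall back to the bytes of
// the previous interval when the counter was reset just before being read,
// and never suspend traffic for longer than maxTimeMs.
static long timeToWait(long currentBytes, long lastBytes,
                       long limitBytesPerSec, long elapsedMs, long maxTimeMs) {
    long bytes = currentBytes != 0 ? currentBytes : lastBytes;
    if (limitBytesPerSec == 0 || bytes == 0) {
        return 0;
    }
    long wait = bytes * 1000 / limitBytesPerSec - elapsedMs;
    return Math.min(Math.max(wait, 0), maxTimeMs);
}
```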
Motivation:
The FastLZ compression codec allows sending and receiving data encoded with the fast FastLZ algorithm using block mode.
Modifications:
- Added part of the `jfastlz` library which implements the FastLZ algorithm. See the FastLz class.
- Implemented FastLzFramedEncoder, which extends MessageToByteEncoder and compresses outgoing messages.
- Implemented FastLzFramedDecoder, which extends ByteToMessageDecoder and decompresses incoming messages.
- Added integration tests for `FastLzFramedEncoder/Decoder`.
Result:
A full FastLZ compression codec which can compress/decompress data using the FastLZ algorithm.
Motivation:
It is often very expensive to instantiate an exception. TextHeaders
should not raise an exception when it fails to find a header or when a
header value is not valid.
Modification:
- Change the return type of the getter methods to Integer and Long so
that null is returned when no header is found or its value is invalid
- Update Javadoc
Result:
- Fixes #2758
- No unnecessary instantiation of exceptions
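A hedged usage sketch, assuming a TextHeaders instance named 'headers'; the accessor name is hypothetical:

```java
// With nullable getters, a missing or malformed header no longer pays the
// cost of constructing and throwing an exception.
Integer contentLength = headers.getInt("Content-Length"); // hypothetical accessor
if (contentLength == null) {
    // Header absent or not a valid integer: no exception was constructed.
}
```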
Motivation:
DefaultTextHeaders.getAll*() methods create an ArrayList whose initial
capacity is 4. However, it is more likely that the actual number of
values is smaller than that.
Modifications:
Reduce the initial capacity of the value list from 4 to 2
Result:
Slightly reduced memory footprint
Related issue: #2649 and #2745
Motivation:
At the moment there is no way to get and remove a header with one call.
This means you need to search the headers two times. We should add
getAndRemove(...) to allow doing so with one call.
Modifications:
Add getAndRemove(...) and getUnconvertedAndRemove(...) and their
variants
Result:
More efficient API
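Usage sketch, assuming a headers instance (the header name is illustrative):

```java
// Before: two searches over the headers.
String id = headers.get("X-Request-Id");
headers.remove("X-Request-Id");

// After: a single search.
String requestId = headers.getAndRemove("X-Request-Id");
```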
Motivation:
At the moment it's only possible for a user to set the RecvByteBufAllocator for a Channel, but there is no way to access the Handle once it is assigned. This makes it hard to write more flexible implementations.
Modifications:
Add a new method to Channel.Unsafe to allow access to the Handle used by the Channel. The RecvByteBufAllocator.Handle is created lazily.
Result:
It's possible to write more flexible implementations that allow adjusting things on the fly for the Handle used by a Channel.
Motivation:
Sometimes a ChannelHandler needs to queue writes up to some point and then process them. We currently have no data structure for this, so the user will use a Queue or something similar. The problem is that with this approach Channel.isWritable() will not work as expected, and so the user risks writing too fast. That's exactly what happened in our SslHandler. For this purpose we need to add a special data structure which also takes care of updating the Channel, ensuring that Channel.isWritable() works as expected.
Modifications:
- Add PendingWriteQueue which can be used for this purpose
- Make use of PendingWriteQueue in SslHandler
Result:
It is now possible to queue writes in a ChannelHandler and still have Channel.isWritable() work as expected. This also fixes #2752.
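A hedged usage sketch of deferring writes in a handler while keeping Channel.isWritable() accurate, since PendingWriteQueue updates the channel's pending-bytes accounting for every queued message:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.channel.PendingWriteQueue;

// Illustrative handler: queue writes until flush, then release them all.
public class DeferringHandler extends ChannelOutboundHandlerAdapter {
    private PendingWriteQueue queue;

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        queue = new PendingWriteQueue(ctx);
    }

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
        queue.add(msg, promise); // deferred; isWritable() still reflects this
    }

    @Override
    public void flush(ChannelHandlerContext ctx) {
        queue.removeAndWriteAll(); // write everything queued so far
        ctx.flush();
    }
}
```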
In Netty 3, downstream writes of SPDY data frames and upstream reads of
SPDY window update frames occur on different threads.
When receiving a window update frame, we synchronize on a Java object
(SpdySessionHandler::flowControlLock) while sending any pending writes
that are now able to complete.
When writing a data frame, we check the send window size to see if we
are allowed to write it to the socket, or if we have to enqueue it as a
pending write. To prevent races with the window update frame, this is
also synchronized on the same SpdySessionHandler::flowControlLock.
In Netty 4, upstream and downstream operations on any given channel now
occur on the same thread. Since Java locks are re-entrant, this now
allows downstream writes to occur while processing window update frames.
In particular, when we receive a window update frame that unblocks a
pending write, this write completes which triggers an event notification
on the response, which in turn triggers a write of a data frame. Since
this is on the same thread it re-enters the lock and modifies the send
window. When the write completes, we continue processing pending writes
without knowledge that the window size has been decremented.
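A distilled sketch of the re-entrancy described above; class, field, and method names are illustrative, not the actual SpdySessionHandler code:

```java
// Java monitors are re-entrant: a listener fired while the lock is held can
// re-enter it and shrink the send window mid-flight, so the code after
// run() continues with a stale view of the window.
final class FlowControlReentrancySketch {
    private final Object flowControlLock = new Object();
    private int sendWindow;

    void onWindowUpdate(int delta, Runnable writeCompleteListener) {
        synchronized (flowControlLock) {
            sendWindow += delta;
            writeCompleteListener.run(); // may call writeDataFrame() below
            // sendWindow may already have been decremented at this point.
        }
    }

    void writeDataFrame(int frameSize) {
        synchronized (flowControlLock) { // same thread: no blocking
            sendWindow -= frameSize;
        }
    }
}
```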
Motivation:
The calculation of the max wait time for HashedWheelTimerTest.testExecutionOnTime() was wrong and so the test sometimes failed.
Modifications:
Fix the max wait time.
Result:
No more test-failures
Related issue: #2743
Motivation:
When there is more than one stream with the same priority, the set
returned by SpdySession.getActiveStreams() will not include all of them,
because it uses a TreeSet and only compares the priority of streams. If
two different streams have the same priority, one of them will be
discarded by the TreeSet.
Modification:
- Rename getActiveStreams() to activeStreams()
- Replace PriorityComparator with StreamComparator
Result:
Two different streams with the same priority are compared correctly.
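The underlying TreeSet behaviour, distilled; class and field names are illustrative:

```java
import java.util.Comparator;
import java.util.TreeSet;

final class StreamSketch {
    final int id;
    final int priority;
    StreamSketch(int id, int priority) { this.id = id; this.priority = priority; }
}

// TreeSet treats compare() == 0 as "already present", so comparing by
// priority alone silently drops the second stream with the same priority.
TreeSet<StreamSketch> byPriority =
        new TreeSet<>(Comparator.comparingInt((StreamSketch s) -> s.priority));
byPriority.add(new StreamSketch(1, 5));
byPriority.add(new StreamSketch(3, 5)); // ignored: compares equal to stream 1
// byPriority.size() == 1

// The fix: break ties on a field that distinguishes the streams.
TreeSet<StreamSketch> fixed = new TreeSet<>(
        Comparator.comparingInt((StreamSketch s) -> s.priority)
                  .thenComparingInt(s -> s.id));
```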
Motivation:
We forgot to do a null check on the cause parameter of
ChannelFuture.setFailure(cause)
Modifications:
Add a null check
Result:
Fixed issue: #2728
Motivation:
We did various changes related to the ChannelOutboundBuffer in the 4.0 branch. This commit ports all of them over and so makes sure our branches are in sync in terms of these changes.
Related to [#2734], [#2709], [#2729], [#2710] and [#2693].
Modification:
Port all changes that were done on the ChannelOutboundBuffer.
This includes the port of the following commits:
- 73dfd7c01b
- 997d8c32d2
- e282e504f1
- 5e5d1a58fd
- 8ee3575e72
- d6f0d12a86
- 16e50765d1
- 3f3e66c31a
Result:
- Less memory usage by ChannelOutboundBuffer
- Same code as in 4.0 branch
- Make it possible to use ChannelOutboundBuffer with Channel implementations that do not extend AbstractChannel
Motivation:
If the request contains URI parameters but no path, HttpRequestEncoder produces an invalid URI while trying to add the missing path.
Modifications:
Correctly handle the case of a URI with parameters but no path.
Result:
HttpRequestEncoder produces a correct URI in all cases.
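An illustrative sketch of the repair (not the actual encoder code): `http://localhost?foo=bar` must become `http://localhost/?foo=bar`.

```java
// Insert the missing "/" before the query string when the URI has no path.
static String ensurePath(String uri) {
    int schemeEnd = uri.indexOf("://");
    int queryStart = uri.indexOf('?');
    if (schemeEnd != -1 && queryStart != -1
            && uri.indexOf('/', schemeEnd + 3) == -1) {
        return uri.substring(0, queryStart) + '/' + uri.substring(queryStart);
    }
    return uri;
}
```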
Related issue: #2733
Motivation:
Unlike OpenSsl, Epoll lacks a couple of useful availability checker
methods:
- ensureAvailability()
- unavailabilityCause()
Modifications:
Add missing methods
Result:
More ways to check the availability and to get the cause of
unavailability programmatically.
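Usage sketch of the added methods:

```java
import io.netty.channel.epoll.Epoll;

// Fail fast when the native transport cannot be loaded, with a real cause.
if (!Epoll.isAvailable()) {
    throw new IllegalStateException("epoll unavailable", Epoll.unavailabilityCause());
}
// Equivalent shortcut that throws with the cause attached:
Epoll.ensureAvailability();
```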
Motivation:
Currently it is not possible to load an encrypted private key when
creating a JDK based SSL server context.
Modifications:
- Added a static method to JdkSslServerContext which handles key spec generation for (encrypted) private keys, and made use of it.
- Added tests for creating an SSL server context based on an (encrypted)
private key.
Result:
It is now possible to create a JDK based SSL server context with an
encrypted (password protected) private key.
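A sketch of the key-spec generation approach, assuming a PKCS#8-encoded, PBE-encrypted key; this mirrors the standard JCA recipe rather than Netty's exact code:

```java
import javax.crypto.Cipher;
import javax.crypto.EncryptedPrivateKeyInfo;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.spec.PKCS8EncodedKeySpec;

// Decrypt an encrypted PKCS#8 key with the given password to recover the
// plain key spec that a KeyFactory can consume.
static PKCS8EncodedKeySpec decryptKeySpec(byte[] encryptedKey, char[] password)
        throws Exception {
    EncryptedPrivateKeyInfo info = new EncryptedPrivateKeyInfo(encryptedKey);
    SecretKeyFactory factory = SecretKeyFactory.getInstance(info.getAlgName());
    SecretKey key = factory.generateSecret(new PBEKeySpec(password));
    Cipher cipher = Cipher.getInstance(info.getAlgName());
    cipher.init(Cipher.DECRYPT_MODE, key, info.getAlgParameters());
    return info.getKeySpec(cipher);
}
```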
Motivation:
There were two buffer leaks in the codec-dns.
Modifications:
- Fix buffer leak in DnsResponseTest.readResponseTest()
- Correctly release DnsResources on Exception
Result:
No more buffer leaks in the codec-dns module.
Motivation:
Before this change, Bzip2BitReader and Bzip2BitWriter accessed the ByteBuf byte by byte. As a result, the tests for the Bzip2 compression codec take a lot of time when run with the paranoid level of resource leak detection. For more information see the comments on #2681 and #2689.
Modifications:
- Increased size of bit buffers from 8 to 64 bits.
- Improved reading and writing operations.
- Saved a reference to the incoming ByteBuf inside Bzip2BitReader.
- Added methods to check possible readable bits and bytes in Bzip2BitReader.
- Updated Bzip2 classes to use new API of Bzip2BitReader.
- Added new constants to Bzip2Constants.
Result:
Increased the size of the bit buffers and improved the performance of the Bzip2 compression codec (by 13% for general work and by 55% for tests with the paranoid level of resource leak detection).
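A simplified sketch of the widened bit buffer (illustrative, not the actual Bzip2BitReader):

```java
import io.netty.buffer.ByteBuf;

// Keep up to 64 buffered bits in a long and refill from the ByteBuf in
// whole bytes, instead of touching the ByteBuf for every single bit.
final class BitReaderSketch {
    private final ByteBuf in;
    private long bitBuffer; // buffered bits, most recent byte in the low bits
    private int bitCount;   // number of valid bits in bitBuffer

    BitReaderSketch(ByteBuf in) {
        this.in = in;
    }

    int readBits(int count) { // count <= 32
        while (bitCount < count) {                       // refill byte-wise
            bitBuffer = bitBuffer << 8 | in.readUnsignedByte();
            bitCount += 8;
        }
        bitCount -= count;
        return (int) ((bitBuffer >>> bitCount) & ((1L << count) - 1));
    }
}
```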