Motivation:
Data that flowed into the decoder should flow out in sequence, whether or not the decoder is removed from the pipeline.
Modification:
Fire the decoded data held in the output list and clear it when the handler is removed,
before handlerRemoved(ctx) is called.
Result:
Fixes #9668.
Motivation:
At the moment we do a ByteBuf.readBytes(...) on removal of the ByteToMessageDecoder if there are any bytes left and forward the returned ByteBuf to the next handler in the pipeline. This is not really needed as we can just forward the cumulation buffer directly and so eliminate the extra memory copy.
Modifications:
Just forward the cumulation buffer directly on removal of the ByteToMessageDecoder
Result:
Fewer memory copies
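A hedged sketch of the difference (simplified helper names, not the actual ByteToMessageDecoder code):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;

final class RemovalForwardSketch {
    // Before: copy the remaining readable bytes into a fresh buffer on removal.
    static void forwardWithCopy(ChannelHandlerContext ctx, ByteBuf cumulation) {
        ByteBuf remaining = cumulation.readBytes(cumulation.readableBytes());
        cumulation.release();
        ctx.fireChannelRead(remaining);
    }

    // After: hand the cumulation buffer itself to the next handler, no extra copy.
    static void forwardDirectly(ChannelHandlerContext ctx, ByteBuf cumulation) {
        ctx.fireChannelRead(cumulation);
    }
}
```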
Motivation:
In the current implementation of the Base64 decoder an invalid
character `\u00BD` is treated as `=`.
Also the character `\u007F` leads to an ArrayIndexOutOfBoundsException.
Modification:
Explicitly check that all input bytes are ASCII characters
(greater than zero). Fix the `decodabet` tables.
Result:
Correct validation of input bytes in the Base64 decoder.
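A minimal illustration of the kind of validation added (hypothetical helper, not the actual Base64 code):

```java
final class Base64ValidationSketch {
    // Reject any byte outside the 7-bit ASCII range before it is used to index
    // the decodabet table; Java bytes are signed, so non-ASCII values are negative.
    static void checkAscii(byte[] src, int off, int len) {
        for (int i = off; i < off + len; i++) {
            if (src[i] < 0) {
                throw new IllegalArgumentException("invalid Base64 input character at index " + i);
            }
        }
    }
}
```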
Motivation:
The Netty homepage (netty.io) serves both "http" and "https".
It's recommended to use https rather than http.
Modification:
Changed "http://netty.io" to "https://netty.io".
Result:
No functional changes.
Motivation:
There is some unnecessary code (like toString() calls) which can be cleaned up.
Modifications:
- Remove unneeded toString() calls
- Simplify substring(...) calls
- Remove some explicit casts when not needed.
Result:
Cleaner code
Motivation:
There is some manual copying of elements of Collections which can be replaced by Collections.addAll(...) and also some unnecessary semicolons.
Modifications:
- Simplify branches
- Use Collections.addAll
- Code cleanup
Result:
Code cleanup
Motivation:
ByteToMessageDecoder only looks at the last channelRead() in the batch
of channelRead()-s when determining whether or not it should call
ChannelHandlerContext#read() to consume more data when !isAutoRead. This
will lead to read() calls issued unnecessarily and unprompted if the very
last channelRead() didn't result in at least one decoded message, even
if there have been messages decoded from other channelRead()-s in the
current batch.
Modifications:
Track decode outcomes for the entire batch of channelRead() calls and
only issue a read in BTMD if the entire batch of channelRead() calls
yielded no complete messages.
Result:
ByteToMessageDecoder will no longer overread when the very last read
yielded no message, but the batch of reads did.
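A simplified sketch of the batch-level bookkeeping (field and method names are illustrative, not the actual ByteToMessageDecoder internals):

```java
import io.netty.channel.ChannelHandlerContext;

final class BatchReadSketch {
    private boolean firedChannelRead; // did any channelRead() in the current batch decode a message?

    void onMessageDecoded() {
        firedChannelRead = true;
    }

    void channelReadComplete(ChannelHandlerContext ctx) {
        // Only ask for more data if the whole batch produced nothing and auto-read is off.
        if (!firedChannelRead && !ctx.channel().config().isAutoRead()) {
            ctx.read();
        }
        firedChannelRead = false; // reset for the next batch
        ctx.fireChannelReadComplete();
    }
}
```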
Motivation:
Lz4FrameEncoder and Lz4FrameDecoder in their default configuration use
an extremely inefficient way to checksum direct byte buffers. In
particular, for every byte checksummed, a single-element byte array is
being allocated and a JNI call is made, which in some internal testing
makes a 25x difference in total throughput and allocates *a lot* of
garbage.
Modifications:
Lz4XXHash32, an implementation of ByteBufChecksum specifically for use
by Lz4FrameEncoder and Lz4FrameDecoder, is introduced. It utilises
xxHash32 block API which provides a hash() method that accepts a
ByteBuffer as an argument. Lz4FrameEncoder and Lz4FrameDecoder are
modified to use this implementation by default.
Result:
Lz4FrameEncoder and Lz4FrameDecoder perform well again when operating
on direct byte buffers with default checksum configuration; a public
implementation is provided for those who need to override the seed.
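A hedged sketch of checksumming a direct ByteBuffer via the lz4-java block API (XXHashFactory / XXHash32 come from net.jpountz.xxhash, which the LZ4 codec already depends on; the seed below is illustrative, not the real default):

```java
import java.nio.ByteBuffer;

import net.jpountz.xxhash.XXHash32;
import net.jpountz.xxhash.XXHashFactory;

final class XxHash32Sketch {
    private static final XXHash32 XXHASH32 = XXHashFactory.fastestInstance().hash32();
    private static final int SEED = 0x12345678; // illustrative seed

    // One block-API call instead of a per-byte update loop that allocates a
    // single-element byte[] and crosses JNI for every byte.
    static int checksum(ByteBuffer buf, int off, int len) {
        return XXHASH32.hash(buf, off, len, SEED);
    }
}
```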
Motivation:
ReflectiveByteBufChecksum#update(buf, off, len) ignores provided offset
and length arguments when operating on direct buffers, leading to wrong
byte sequences being checksummed and ultimately incorrect checksum
values (unless checksumming the entire buffer).
Modifications:
Use the provided offset and length arguments to get the correct nio
buffer to checksum; add test coverage exercising the four meaningfully
different offset and length combinations.
Result:
Offset and length are respected and a correct checksum gets calculated;
simple unit test should prevent regressions in the future.
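A minimal sketch of the idea behind the fix (hypothetical helper; the real ReflectiveByteBufChecksum differs):

```java
import java.util.zip.CRC32;

import io.netty.buffer.ByteBuf;

final class ChecksumRangeSketch {
    // Checksums exactly [off, off + len) of a (possibly direct) buffer.
    static long crc32(ByteBuf buf, int off, int len) {
        CRC32 crc = new CRC32();
        // internalNioBuffer(off, len) yields a ByteBuffer window covering only the
        // requested range, so the provided offset and length are respected.
        crc.update(buf.internalNioBuffer(off, len));
        return crc.getValue();
    }
}
```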
Motivation:
Because of a simple bug in ByteBufChecksum#updateByteBuffer(Checksum),
ReflectiveByteBufChecksum is never used for CRC32 and Adler32, resulting
in direct ByteBuffers being checksummed byte by byte, which is
undesirable.
Modification:
Fix ByteBufChecksum#updateByteBuffer(Checksum) method to pass the
correct argument to Method#invoke(Checksum, ByteBuffer).
Result:
ReflectiveByteBufChecksum will now be used for Adler32 and CRC32 on
Java8+ and direct ByteBuffers will no longer be checksummed on slow
byte-by-byte basis.
Motivation:
It is valid to use null as sender so we should support it when DatagramPacketEncoder checks if it supports the message.
Modifications:
- Add null check
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/9199.
Motivation:
At the moment ByteToMessageDecoder always calls fireChannelReadComplete() when the handler is removed from the pipeline and the cumulation buffer is not null. We should only call it when we also call fireChannelRead(...), which only happens if the cumulation buffer is not null and readable.
Modifications:
Only call fireChannelReadComplete() if fireChannelRead(...) is called before during removal of the handler.
Result:
More correct semantics
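A hedged sketch of the removal logic described above (simplified, not the actual ByteToMessageDecoder code):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;

final class RemovalReadCompleteSketch {
    static void onHandlerRemoved(ChannelHandlerContext ctx, ByteBuf cumulation) {
        if (cumulation == null) {
            return;
        }
        if (cumulation.isReadable()) {
            // Only signal read-complete when data was actually forwarded.
            ctx.fireChannelRead(cumulation);
            ctx.fireChannelReadComplete();
        } else {
            cumulation.release();
        }
    }
}
```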
Motivation
Pipeline handlers are free to "take control" of input buffers if they have singular refcount - in particular to mutate their raw data if non-readonly via discarding of read bytes, etc.
However there are various places (primarily unit tests) where a wrapped byte-array buffer is passed in and the wrapped array is assumed not to change (used after the wrapped buffer is passed to EmbeddedChannel.writeInbound()). This invalid assumption could result in unexpected errors, such as those exposed by #8931.
Modifications
Anywhere that the data passed to writeInbound() might be used again, ensure that either:
- A copy is used rather than wrapping a shared byte array, or
- The buffer is otherwise protected from modification by making it read-only
For the tests, copying is preferred since it still allows the "mutating" optimizations to be exercised.
Results
Avoid possible errors when pipeline assumes it has full control of input buffer.
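A hedged illustration of the two safe options, using Netty's public EmbeddedChannel and Unpooled API:

```java
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;

final class WriteInboundSketch {
    static void safeWrites(EmbeddedChannel channel, byte[] data) {
        // Option 1: copy, so handler-side mutations cannot touch `data`
        // while the "mutating" optimizations are still exercised.
        channel.writeInbound(Unpooled.copiedBuffer(data));

        // Option 2: wrap, but make the buffer read-only so it cannot be mutated.
        channel.writeInbound(Unpooled.wrappedBuffer(data).asReadOnly());
    }
}
```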
Motivation:
An OOME can be caused by the suppressedExceptions list growing when other libraries call Throwable#addSuppressed on our shared exception instances. As we have no control over what other libraries do, we need to ensure this can not lead to an OOME.
Modifications:
Only use static instances of the exceptions if we can either disable addSuppressed or we run on Java 6.
Result:
No OOME possible because of addSuppressed. Fixes https://github.com/netty/netty/issues/9151.
Motivation:
DefaultHeaders entries maintain two linked lists: one for overall insertion order
and one for "in bucket" order. DefaultHeaders#valueIterator removal (introduced in 1d9090aab2) only reliably
removes the entry from the overall insertion order, but may not remove it from the
bucket unless the element is the first entry.
Modifications:
- DefaultHeaders$ValueIterator should track 2 elements behind the next entry so
that the single linked "in bucket" list can be patched up when removing the
previous entry.
Result:
More correct DefaultHeaders#valueIterator removal.
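A generic, self-contained sketch of the pointer bookkeeping described above (not the actual DefaultHeaders code): the iterator keeps the node two behind `next` so a removed entry can be unlinked from a singly linked list.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

final class SinglyLinkedIterator<T> implements Iterator<T> {
    static final class Node<T> {
        T value;
        Node<T> next;
    }

    private final Node<T> head;     // sentinel head of the bucket list
    private Node<T> beforePrevious; // node two behind `next`
    private Node<T> previous;       // last node returned by next()
    private Node<T> next;           // node to return on the next call

    SinglyLinkedIterator(Node<T> head) {
        this.head = head;
        this.beforePrevious = head;
        this.next = head.next;
    }

    @Override
    public boolean hasNext() {
        return next != null;
    }

    @Override
    public T next() {
        if (next == null) {
            throw new NoSuchElementException();
        }
        if (previous != null) {
            beforePrevious = previous;
        }
        previous = next;
        next = next.next;
        return previous.value;
    }

    @Override
    public void remove() {
        if (previous == null) {
            throw new IllegalStateException();
        }
        beforePrevious.next = next; // patch the singly linked list around the removed node
        previous = null;
    }
}
```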
Motivation:
While iterating values it is often desirable to be able to remove individual
entries. The existing mechanism to do this involves removal of all entries and
conditional re-insertion, which is heavyweight for removing a single
value.
Modifications:
- DefaultHeaders$ValueIterator supports removal
Result:
It is possible to remove entries while iterating the values in DefaultHeaders.
Motivation:
32563bfcc1 introduced a regression in which we no longer discarded the messages after we handled an oversized message.
Modifications:
- Do not set aggregating to false after handleOversizedMessage is called
- Adjust unit tests to verify the behaviour is correct again.
Result:
Fixes https://github.com/netty/netty/issues/9007.
Motivation:
In 42742e233f we already added default methods to Channel*Handler and deprecated the Adapter classes to simplify the class hierarchy. With this change we go even further and merge everything into just ChannelHandler. This simplifies things even more in terms of class-hierarchy.
Modifications:
- Merge ChannelInboundHandler | ChannelOutboundHandler into ChannelHandler
- Adjust code to just use ChannelHandler
- Deprecate old interfaces.
Result:
Cleaner and simpler code in terms of class-hierarchy.
Motivation:
As we now use Java 8 as the minimum Java version we can deprecate ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter and just move the default implementations into the interfaces. This makes things a bit more flexible for the end-user and also simplifies the class hierarchy.
Modifications:
- Mark ChannelInboundHandlerAdapter and ChannelOutboundHandlerAdapter as deprecated
- Add default implementations to ChannelInboundHandler / ChannelOutboundHandler
- Refactor our code to not use ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter anymore
Result:
Cleanup class-hierarchy and make things a bit more flexible.
Motivation:
PromiseCombiner is not thread-safe and even assumes all added Futures are using the same EventExecutor. This is kind of fragile as we do not enforce this. We need to enforce this contract to ensure it's safe to use and easy to spot concurrency problems.
Modifications:
- Add a new constructor to PromiseCombiner that takes an EventExecutor and deprecate the old no-arg constructor.
- Check if methods are called from within the EventExecutor thread and fail if not
- Correctly dispatch on the right EventExecutor if the Future uses a different EventExecutor to eliminate concurrency issues.
Result:
Safer use of PromiseCombiner + enforcement of correct usage / contract.
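A hedged usage sketch with the new executor-aware constructor (channel setup and the aggregate promise are assumed to exist):

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelPromise;
import io.netty.util.concurrent.PromiseCombiner;

final class PromiseCombinerSketch {
    // Combine several write futures into one aggregate promise; all PromiseCombiner
    // calls must now happen on the given EventExecutor's thread.
    static void writeBoth(Channel ch, Object first, Object second, ChannelPromise aggregate) {
        PromiseCombiner combiner = new PromiseCombiner(ch.eventLoop());
        combiner.add(ch.write(first));
        combiner.add(ch.write(second));
        combiner.finish(aggregate);
    }
}
```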
Motivation:
We can just use Objects.requireNonNull(...) as a replacement for ObjectUtil.checkNotNull(...)
Modifications:
- Use Objects.requireNonNull(...)
Result:
Less code to maintain.
Motivation:
While looking through the code I found one interesting place, DateFormatter.tryParseMonth, that was not very efficient, so I decided to optimize it a bit.
Modification:
Changed the DateFormatter.tryParseMonth method: instead of invoking regionMatches() for every month, compare chars one by one.
Result:
DateFormatter.parseHttpDate method performance improved by ~3% to ~15%.
Benchmark                                                (DATE_STRING)                   Mode  Cnt        Score       Error  Units
DateFormatter2Benchmark.parseHttpHeaderDateFormatter     Sun, 27 Jan 2016 19:18:46 GMT  thrpt    6  4142781.221 ± 82155.002  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatter     Sun, 27 Dec 2016 19:18:46 GMT  thrpt    6  3781810.558 ± 38679.061  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew  Sun, 27 Jan 2016 19:18:46 GMT  thrpt    6  4372569.705 ± 30257.537  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew  Sun, 27 Dec 2016 19:18:46 GMT  thrpt    6  4339785.100 ± 57542.660  ops/s
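A minimal sketch of the char-by-char approach (hypothetical helper, not the actual DateFormatter code):

```java
final class MonthParseSketch {
    // Match "Jan" by comparing the three characters directly instead of
    // calling regionMatches() for every candidate month.
    static boolean isJan(CharSequence txt, int pos) {
        return txt.length() >= pos + 3
                && txt.charAt(pos) == 'J'
                && txt.charAt(pos + 1) == 'a'
                && txt.charAt(pos + 2) == 'n';
    }
}
```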
Motivation
Implementations of MessageAggregator (HttpObjectAggregator in particular) may wish to
selectively aggregate requests and responses on a case-by-case basis, such as
only POST requests or only responses of a certain content type.
Modifications
Add a flag to MessageAggregator that toggles between true/false depending on whether aggregation
is desired for the current message or not.
Result
Fixes #8772
Motivation:
ChannelHandler.exceptionCaught(...) was marked as @deprecated as it should only exist in inbound handlers.
Modifications:
Remove ChannelHandler.exceptionCaught(...) and adjust code / tests.
Result:
Fixes https://github.com/netty/netty/issues/8527
Motivation:
We have utility methods to check for > 0 and >= 0 arguments. We should use them.
Modification:
Use checkPositive / checkPositiveOrZero instead of hand-written if statements.
Result:
Re-use of the utility methods.
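A hedged before/after sketch (assuming ObjectUtil.checkPositive(int, String) from io.netty.util.internal, which returns its argument):

```java
import io.netty.util.internal.ObjectUtil;

final class CheckPositiveSketch {
    private final int maxFrameLength;

    CheckPositiveSketch(int maxFrameLength) {
        // Before:
        // if (maxFrameLength <= 0) {
        //     throw new IllegalArgumentException("maxFrameLength must be > 0: " + maxFrameLength);
        // }
        // this.maxFrameLength = maxFrameLength;

        // After:
        this.maxFrameLength = ObjectUtil.checkPositive(maxFrameLength, "maxFrameLength");
    }
}
```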
Motivation:
We can use lambdas now as we use Java 8.
Modification:
Use lambdas in all packages; #8751 only migrated the transport package.
Result:
Code cleanup.
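A small before/after illustration (generic Java, not a specific Netty call site):

```java
import java.util.concurrent.Executor;

final class LambdaSketch {
    static void schedule(Executor executor) {
        // Before: anonymous inner class.
        executor.execute(new Runnable() {
            @Override
            public void run() {
                System.out.println("run");
            }
        });

        // After: lambda.
        executor.execute(() -> System.out.println("run"));
    }
}
```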
Motivation:
As netty 4.x supported Java 6 we had various if statements to check for java versions < 8. We can remove these now.
Modification:
Remove unnecessary if statements that check for java versions < 8.
Result:
Cleanup code.
Motivation:
We need to update to a new checkstyle plugin to allow the usage of lambdas.
Modifications:
- Update to new plugin version.
- Fix checkstyle problems.
Result:
Be able to use checkstyle plugin which supports new Java syntax.
* Decouple EventLoop details from the IO handling for each transport to allow easy re-use of code and customization
Motivation:
As of today, extending EventLoop implementations to add custom logic / metrics / instrumentation is only possible in a very limited way, if at all. This is due to the fact that most implementations are final or even package-private. And even if they were public, the ability to do something useful with them would be very limited, as the IO processing and the task processing are very tightly coupled. All of the mentioned things are a big pain point in Netty 4.x and need improvement.
Modifications:
This changeset decouples the IO processing logic from the task processing logic for the main transports (NIO, Epoll, KQueue) by introducing the concept of an IoHandler. The IoHandler itself is responsible for waiting for IO readiness and processing these IO events. The execution of the IoHandler itself is done by the SingleThreadEventLoop as part of its EventLoop processing. This allows using the same EventLoopGroup (MultithreadEventLoopGroup) for all the mentioned transports by just specifying a different IoHandlerFactory during construction (see the sketch after the list below).
Beside this core API change, this changeset also allows easily extending SingleThreadEventExecutor / SingleThreadEventLoop to add custom logic to them, which then can be reused by all the transports. The ideas are very similar to what is provided by ScheduledThreadPoolExecutor (that is part of the JDK). This allows, for example, things like:
* Adding instrumentation / metrics:
* how many Channels are registered on a SingleThreadEventLoop
* how many Channels were handled during the IO processing in an EventLoop run
* how many tasks were handled during the last EventLoop / EventExecutor run
* how many outstanding tasks we have
...
* Implementing custom strategies for choosing the next EventExecutor / EventLoop to use based on these metrics.
* Use different Promise / Future / ScheduledFuture implementations
* Decorate Runnables / Callables when submitted to the EventExecutor / EventLoop
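As referenced above, a minimal construction sketch, assuming the new API exposes NioHandler.newFactory() and a MultithreadEventLoopGroup constructor that accepts an IoHandlerFactory (names may differ):

```java
import io.netty.channel.EventLoopGroup;
import io.netty.channel.MultithreadEventLoopGroup;
import io.netty.channel.nio.NioHandler;

final class IoHandlerFactorySketch {
    static EventLoopGroup newNioGroup(int threads) {
        // The transport is chosen by the IoHandlerFactory passed at construction time;
        // switching to Epoll/KQueue only means passing a different factory.
        return new MultithreadEventLoopGroup(threads, NioHandler.newFactory());
    }
}
```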
As a lot of functionality is folded into the MultiThreadEventLoopGroup and SingleThreadEventLoopGroup, this changeset also removes:
* AbstractEventLoop
* AbstractEventLoopGroup
* EventExecutorChooser
* EventExecutorChooserFactory
* DefaultEventLoopGroup
* DefaultEventExecutor
* DefaultEventExecutorGroup
Result:
Fixes https://github.com/netty/netty/issues/8514.
Motivation:
Custom Netty ThreadLocalRandom and ThreadLocalRandomProvider classes are no longer needed and can be removed.
Modification:
Remove own ThreadLocalRandom
Result:
Less code to maintain
Motivation:
PlatformDependent.newConcurrentHashMap() is no longer needed, so it can be removed and new ConcurrentHashMap<>() inlined instead of invoking PlatformDependent.newConcurrentHashMap().
Modification:
Use ConcurrentHashMap provided by the JDK directly.
Result:
Less code to maintain.
Motivation:
We can use the diamond operator these days.
Modification:
Use diamond operator whenever possible.
Result:
More modern code and less boiler-plate.
Motivation:
Since Java 7 we can automatically close resources with the try-with-resources construct.
Modification:
Changed the applicable try/catch blocks in the code to use try-with-resources.
Result:
Less boiler-plate
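A small illustration of the pattern (generic Java, not a specific Netty call site):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

final class TryWithResourcesSketch {
    static String firstLine(String path) throws IOException {
        // The reader is closed automatically, even if readLine() throws.
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }
}
```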
Motivation:
While we are not yet quite sure if we want to require Java 11 as the minimum, we are at least sure we want to use Java 8 as the minimum.
Modifications:
Change the minimum version to Java 8 and update some tests which failed to compile after this change.
Result:
Use Java8 as minimum and be able to use Java8 features.
Motivation:
Upon investigation of the source code, LineBasedFrameDecoder, JsonObjectDecoder and XmlFrameDecoder
appear to only support ASCII or UTF-8 input. This is an important characteristic
that is not reflected in any documentation, which could lead to improper usage and bugs.
Modifications:
A Javadoc comment is added to all three classes to state that the implementation is only
compatible with UTF-8 or ASCII input streams, and it briefly touches on implementation details.
Result:
End users of the Netty library do not have to study the source code to determine the character
encoding limitations of these classes.
Motivation:
ByteBuf supports “marker indexes”. The intended use case for these is that while a speculative operation (e.g. a decode) is in progress the user can “mark” an index and refer to it later if the operation isn’t successful (e.g. not enough data). However this is rarely used in practice,
requires extra memory to maintain, and introduces complexity in the state management for derived/pooled buffer initialization, resizing, and other operations which may modify reader/writer indexes.
Modifications:
Remove support for marking and adjust testcases / code.
Result:
Fixes https://github.com/netty/netty/issues/8535.
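A hedged sketch of the replacement pattern: save the reader index in a local variable instead of relying on the removed mark/reset API.

```java
import io.netty.buffer.ByteBuf;

final class NoMarkerIndexSketch {
    // Speculative decode of a length-prefixed frame: remember the reader index
    // up front and roll back manually if there is not enough data yet.
    static byte[] tryReadFrame(ByteBuf in) {
        int savedReaderIndex = in.readerIndex();
        if (in.readableBytes() < Integer.BYTES) {
            return null; // not even the length prefix is available
        }
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.readerIndex(savedReaderIndex); // roll back the length read
            return null;
        }
        byte[] frame = new byte[length];
        in.readBytes(frame);
        return frame;
    }
}
```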
Motivation:
We had some typos (most likely caused by copy-and-paste) in the API docs which should be fixed.
Modifications:
Replace the word "encoder" with "decoder".
Result:
Correct apidocs.
Motivation:
Most of the maven modules do not explicitly declare their
dependencies and rely on transitivity, which is not always correct.
Modifications:
For all maven modules, add all of their dependencies to pom.xml
Result:
All of the (essentially non-transitive) dependencies of the modules are explicitly declared in pom.xml
Motivation:
If the encoder needs to flush more than one outbound message it will
create a new ChannelPromise for all but the last write which will
swallow failures.
Modification:
Use a PromiseCombiner when there are multiple messages and the parent
promise isn't the `VoidPromise`.
Result:
Intermediate failures are propagated to the original ChannelPromise.
Motivation:
There are currently many more places where PlatformDependent.allocateUninitializedArray()
could be used which were possibly not considered when the method was added.
If https://github.com/netty/netty/pull/8388 is included in its current
form, a number of these places could additionally make use of the same
BYTE_ARRAYS threadlocal.
There's also a couple of adjacent places where an optimistically-pooled
heap buffer is used for temp byte storage which could use the
threadlocal too in preference to allocating a temp heap bytebuf wrapper.
For example
https://github.com/netty/netty/blob/4.1/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java#L1417.
Modifications:
Replace new byte[] with PlatformDependent.allocateUninitializedArray()
where appropriate; make use of ByteBufUtil.getBytes() in some places
which currently perform the equivalent logic, including avoiding copy of
backing array if possible (although would be rare).
Result:
Further potential speed-up with java9+ and appropriate compile flags.
Many of these places could be on latency-sensitive code paths.
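A small sketch of the substitution (PlatformDependent is Netty's internal utility; whether the array actually skips zero-filling depends on Java 9+ and the compile flags mentioned above):

```java
import io.netty.util.internal.PlatformDependent;

final class UninitializedArraySketch {
    static byte[] tempBuffer(int size) {
        // Before: byte[] buf = new byte[size];  // always zero-fills
        // After: may skip the zero-fill when supported, otherwise behaves like new byte[size].
        return PlatformDependent.allocateUninitializedArray(size);
    }
}
```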
* Optimize AbstractByteBuf.getCharSequence() in US_ASCII case
Motivation:
Inspired by https://github.com/netty/netty/pull/8388, I noticed this
simple optimization to avoid char[] allocation (also suggested in a TODO
here).
Modifications:
Return an AsciiString from AbstractByteBuf.getCharSequence() if
requested charset is US_ASCII or ISO_8859_1 (latter thanks to
@Scottmitch's suggestion). Also tweak unit tests not to require Strings
and include a new benchmark to demonstrate the speedup.
Result:
Speed-up of AbstractByteBuf.getCharSequence() in the US_ASCII and ISO 8859-1
cases.
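A hedged usage sketch of the optimized path (after this change the CharSequence returned for US_ASCII is expected to be an AsciiString rather than a String):

```java
import java.nio.charset.StandardCharsets;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

final class GetCharSequenceSketch {
    static void readAscii() {
        ByteBuf buf = Unpooled.copiedBuffer("hello", StandardCharsets.US_ASCII);
        // For US_ASCII no char[] needs to be allocated for the returned sequence.
        CharSequence seq = buf.getCharSequence(0, buf.readableBytes(), StandardCharsets.US_ASCII);
        System.out.println(seq);
        buf.release();
    }
}
```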