Commit Graph

246 Commits

Author SHA1 Message Date
Nick Hill
019cdca40b Other ByteToMessageDecoder streamlining (#9891)
Motivation

This PR is a reduced-scope replacement for #8931. It doesn't include the
changes related to how/when discarding read bytes is done, which we plan
to address in subsequent updates.

Modifications

- Avoid copying bytes in COMPOSITE_CUMULATOR in all cases, performing a
shallow copy where necessary; also guard against the (unusual) case where
the input buffer is a composite with writer index != capacity
- Ensure we don't pass a non-contiguous buffer when MERGE_CUMULATOR is
used
- Manually inline some calls to ByteBuf#writeBytes(...) to eliminate
redundant checks and reduce stack depth

Also addresses prior minor review comments from @trustin

Result

More correct handling of merge/composite cases and
more efficient handling of composite case.
2019-12-23 14:01:30 +01:00
Norman Maurer
0e4c073bcf
Remove the intermediate List from ByteToMessageDecoder (and sub-class… (#8626)
Motivation:

ByteToMessageDecoder requires using an intermediate List to put results into. This intermediate list adds overhead (memory/CPU) which grows as the number of objects increases. This overhead can be avoided by directly propagating events through the ChannelPipeline via ctx.fireChannelRead(...). This also makes the semantics more clear and allows us to keep track if we need to call ctx.read() in all cases.

Modifications:

- Remove List from the method signature of ByteToMessageDecoder.decode(...) and decodeLast(...)
- Adjust all sub-classes
- Adjust unit tests
- Fix javadocs.

Result:

Adjust ByteToMessageDecoder as noted in https://github.com/netty/netty/issues/8525.
2019-12-16 21:00:32 +01:00
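As a rough illustration of the List-free style this change describes (a minimal sketch assuming the new decode(ChannelHandlerContext, ByteBuf) signature; IntegerDecoder is a hypothetical example class):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.handler.codec.ByteToMessageDecoder;

    // Hypothetical decoder: each decoded message is fired down the pipeline
    // immediately instead of being collected in an intermediate List.
    public class IntegerDecoder extends ByteToMessageDecoder {
        @Override
        protected void decode(ChannelHandlerContext ctx, ByteBuf in) {
            while (in.readableBytes() >= 4) {
                ctx.fireChannelRead(in.readInt());
            }
        }
    }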
Scott Mitchell
bf0bd9aac9 SnappyFrameDecoderTest ByteBuf leak (#9854)
Motivation:
SnappyFrameDecoderTest has a few tests which fail to close the EmbeddedChannel
and therefore may leak ByteBuf objects.

Modifications:
- Make sure EmbeddedChannel#finishAndReleaseAll() is called in all tests

Result:
No more leaks from SnappyFrameDecoderTest.
2019-12-08 07:38:53 +01:00
Norman Maurer
bcb6d38026 Correctly close EmbeddedChannel and release buffers in SnappyFrameDecoderTest (#9851)
Motivation:

We did not correctly close the `EmbeddedChannel`, which meant `handlerRemoved(...)` was not called. This can lead to leaks. Besides this, we also did not correctly consume the produced data, which could also show up as a leak.

Modifications:

- Always call `EmbeddedChannel.finish()`
- Ensure we consume all produced data and release it

Result:

No more leaks in test. This showed up in https://github.com/netty/netty/pull/9850#issuecomment-562504863.
2019-12-06 12:03:19 +01:00
Martin Furmanski
585ed4d08f Improve error handling in ByteToMessageDecoder when expand fails (#9822)
Motivation:

The buffer which the decoder allocates for the expansion can be
leaked if there is a subsequent issue writing to it.

Modifications:
The error handling has been improved so that the new buffer is always
released if the expansion fails.

Result:
The decoder will not leak in this scenario any more.

Fixes: https://github.com/netty/netty/issues/9812
2019-11-28 12:29:05 +01:00
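The release-on-failure pattern described above, as a minimal sketch (not the actual Netty code; expandAndAppend is a hypothetical helper):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufAllocator;

    final class CumulationExpansion {
        // Grow the cumulation so it can hold 'in'; if the copy into the freshly
        // allocated buffer fails, that new buffer is released so nothing leaks.
        static ByteBuf expandAndAppend(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
            ByteBuf newCumulation = alloc.buffer(cumulation.readableBytes() + in.readableBytes());
            ByteBuf toRelease = newCumulation;
            try {
                newCumulation.writeBytes(cumulation).writeBytes(in);
                toRelease = cumulation; // success: the old buffer is the one to drop
                return newCumulation;
            } finally {
                toRelease.release();
                in.release(); // the input is always consumed by this helper
            }
        }
    }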
switchYello
402537cab0 fix remove handler cause ByteToMessageDecoder out disorder (#9670)
Motivation:

Data flowing into the decoder should flow out in sequence, whether the decoder is removed or not.

Modification:

Fire any data remaining in the internal out list and clear it when the handler is removed,
before calling handlerRemoved(ctx)

Result:

Fixes #9668 .
2019-10-21 10:03:01 +02:00
Nikolay Fedorovskikh
c0f9923823 Fixes validation of input bytes in the Base64 decoder (#9623)
Motivation:
In the current implementation of the Base64 decoder the invalid
character `\u00BD` is treated as `=`.
The character `\u007F` also leads to an ArrayIndexOutOfBoundsException.

Modification:
Explicitly check that all input bytes are ASCII characters
(greater than zero). Fix the `decodabet` tables.

Result:
Correct validation of input bytes in the Base64 decoder.
2019-10-10 22:45:44 +04:00
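A minimal sketch of the validation described above (hypothetical helper, not the decoder's actual code): any byte with the high bit set is rejected before the decodabet table is indexed, so non-ASCII input can neither alias '=' nor overflow the table.

    final class Base64ByteCheck {
        // A Java byte is negative exactly when its high bit is set, i.e. the
        // character is outside the ASCII range and cannot be valid Base64.
        static boolean isAscii(byte b) {
            return b >= 0;
        }

        static int decodabetIndex(byte b, byte[] decodabet) {
            if (!isAscii(b)) {
                throw new IllegalArgumentException("invalid Base64 input character: " + (b & 0xFF));
            }
            return decodabet[b]; // safe: index is always 0..127
        }
    }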
jingene
af614e4d6e Change the netty.io homepage scheme(http -> https) (#9344)
Motivation:

The Netty homepage (netty.io) serves both "http" and "https".
It's recommended to use https rather than http.
Modification:

I changed from "http://netty.io" to "https://netty.io"
Result:

No functional change.
2019-07-09 21:10:14 +02:00
jimin
d54678d645 Remove unnecessary code (#9303)
Motivation:

There is some unnecessary code (like toString() calls) which can be cleaned up.

Modifications:

- Remove not needed toString() calls
- Simplify substring(...) calls
- Remove some explicit casts when not needed.

Result:

Cleaner code
2019-07-04 09:02:24 +02:00
Aleksey Yeschenko
8d43869369 Prevent ByteToMessageDecoder from overreading when !isAutoRead (#9252)
Motivation:

ByteToMessageDecoder only looks at the last channelRead() in the batch
of channelRead()-s when determining whether or not it should call
ChannelHandlerContext#read() to consume more data when !isAutoRead. This
will lead to read() calls being issued unnecessarily and unprompted if the very
last channelRead() didn't result in at least one decoded message, even
if there have been messages decoded from other channelRead()-s in the
current batch.

Modifications:

Track decode outcomes for the entire batch of channelRead() calls and
only issue a read in BTMD if the entire batch of channelRead() calls
yielded no complete messages.

Result:

ByteToMessageDecoder will no longer overread when the very last read
yielded no message, but the batch of reads did.
2019-06-28 13:45:56 +02:00
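A conceptual sketch of the batching approach (field and method names here are illustrative, not Netty's): a flag records whether any channelRead() in the current batch produced a message, and channelReadComplete() only issues a read when the whole batch produced nothing and auto-read is off.

    import io.netty.channel.ChannelHandlerContext;

    final class BatchReadTracker {
        private boolean firedChannelRead; // set if any channelRead in this batch decoded a message

        void onMessageDecoded() {
            firedChannelRead = true;
        }

        void onChannelReadComplete(ChannelHandlerContext ctx) {
            // Only ask the transport for more data if the entire batch yielded
            // nothing and the user has auto-read disabled.
            if (!firedChannelRead && !ctx.channel().config().isAutoRead()) {
                ctx.read();
            }
            firedChannelRead = false; // reset for the next batch
        }
    }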
Aleksey Yeschenko
09158a6420 Fix LZ4 encoder/decoder performance with (default) xxHash32 (#9249)
Motivation:

Lz4FrameEncoder and Lz4FrameDecoder in their default configuration use
an extremely inefficient way to checksum direct byte buffers. In
particular, for every byte checksummed, a single-element byte array is
being allocated and a JNI call is made, which in some internal testing
makes a 25x difference in total throughput and allocates *a lot* of
garbage.

Modifications:

Lz4XXHash32, an implementation of ByteBufChecksum specifically for use
by Lz4FrameEncoder and Lz4FrameDecoder, is introduced. It utilises
xxHash32 block API which provides a hash() method that accepts a
ByteBuffer as an argument. Lz4FrameEncoder and Lz4FrameDecoder are
modified to use this implementation by default.

Result:

Lz4FrameEncoder and Lz4FrameDecoder perform well again when operating
on direct byte buffers with default checksum configuration; a public
implementation is provided for those who need to override the seed.
2019-06-18 09:33:25 +02:00
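A hedged sketch of the block-hash idea using lz4-java's xxHash32 (assuming the XXHash32#hash(ByteBuffer, int, int, int) overload; the seed constant below is an assumption, not quoted from the commit):

    import java.nio.ByteBuffer;
    import net.jpountz.xxhash.XXHash32;
    import net.jpountz.xxhash.XXHashFactory;

    final class BlockXxHash32Example {
        private static final int SEED = 0x9747b28c; // placeholder seed

        static int checksum(ByteBuffer buffer, int offset, int length) {
            XXHash32 hash = XXHashFactory.fastestInstance().hash32();
            // One call over the whole region: no per-byte byte[] allocation,
            // no per-byte JNI call.
            return hash.hash(buffer, offset, length, SEED);
        }
    }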
Aleksey Yeschenko
6390c6f2f6 Fix ReflectiveByteBufChecksum with direct buffers (#9244)
Motivation:

ReflectiveByteBufChecksum#update(buf, off, len) ignores provided offset
and length arguments when operating on direct buffers, leading to wrong
byte sequences being checksummed and ultimately incorrect checksum
values (unless checksumming the entire buffer).

Modifications:

Use the provided offset and length arguments to get the correct nio
buffer to checksum; add test coverage exercising the four meaningfully
different offset and length combinations.

Result:

Offset and length are respected and a correct checksum gets calculated;
simple unit test should prevent regressions in the future.
2019-06-17 20:38:06 +02:00
Norman Maurer
4b8db65b16 Allow null sender when using DatagramPacketEncoder (#9204)
Motivation:

It is valid to use null as the sender, so we should support it when DatagramPacketEncoder checks if it supports the message.

Modifications:

- Add null check
- Add unit test

Result:

Fixes https://github.com/netty/netty/issues/9199.
2019-06-03 08:44:48 +02:00
Nick Hill
23554e6997 Ensure "full" ownership of msgs passed to EmbeddedChannel.writeInbound() (#9058)
Motivation

Pipeline handlers are free to "take control" of input buffers if they have singular refcount - in particular to mutate their raw data if non-readonly via discarding of read bytes, etc.

However there are various places (primarily unit tests) where a wrapped byte-array buffer is passed in and the wrapped array is assumed not to change (used after the wrapped buffer is passed to EmbeddedChannel.writeInbound()). This invalid assumption could result in unexpected errors, such as those exposed by #8931.

Modifications

Anywhere that the data passed to writeInbound() might be used again, ensure that either:
- A copy is used rather than wrapping a shared byte array, or
- The buffer is otherwise protected from modification by making it read-only

For the tests, copying is preferred since it still allows the "mutating" optimizations to be exercised.

Results

Avoid possible errors when the pipeline assumes it has full control of the input buffer.
2019-05-22 12:35:03 +02:00
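The two safe options described above, sketched with Unpooled helpers (illustrative test-style code):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.embedded.EmbeddedChannel;

    final class WriteInboundOwnershipExample {
        static void feed(EmbeddedChannel channel, byte[] sharedBytes) {
            // Option 1: hand the pipeline its own copy, so mutation inside the
            // pipeline cannot corrupt 'sharedBytes'.
            channel.writeInbound(Unpooled.copiedBuffer(sharedBytes));

            // Option 2: wrap the shared array but make the buffer read-only so
            // handlers cannot mutate the backing array in place.
            ByteBuf readOnly = Unpooled.wrappedBuffer(sharedBytes).asReadOnly();
            channel.writeInbound(readOnly);
        }
    }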
Scott Mitchell
3c11af965a DefaultHeaders#valueIterator doesn't remove from the in bucket list (#9090)
Motivation:
DefaultHeaders entries maintain two linked lists: one for overall insertion order
and one for "in bucket" order. DefaultHeaders#valueIterator removal (introduced in 1d9090aab2) only reliably
removes the entry from the overall insertion order, but may not remove from the
bucket unless the element is the first entry.

Modifications:
- DefaultHeaders$ValueIterator should track 2 elements behind the next entry so
that the singly linked "in bucket" list can be patched up when removing the
previous entry.

Result:
More correct DefaultHeaders#valueIterator removal.
2019-04-27 21:16:04 +02:00
Scott Mitchell
7e469b764e DefaultHeaders#valueIterator to support removal (#9063)
Motivation:
While iterating values it is often desirable to be able to remove individual
entries. The existing mechanism to do this involves removal of all entries and
conditional re-insertion, which is heavyweight just to remove a single
value.

Modifications:
- DefaultHeaders$ValueIterator supports removal

Result:
It is possible to remove entries while iterating the values in DefaultHeaders.
2019-04-16 19:38:08 +02:00
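A usage sketch of the new removal support (assuming a DefaultHeaders instance and its valueIterator(name) method; the header name and value are made up):

    import java.util.Iterator;
    import io.netty.handler.codec.DefaultHeaders;

    final class ValueIteratorRemovalExample {
        // Drop a single unwanted value for a name without clearing and
        // re-adding the remaining values.
        static void removeStaleValue(DefaultHeaders<CharSequence, CharSequence, ?> headers) {
            Iterator<CharSequence> it = headers.valueIterator("x-trace");
            while (it.hasNext()) {
                if ("stale".contentEquals(it.next())) {
                    it.remove(); // supported by DefaultHeaders$ValueIterator after this change
                }
            }
        }
    }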
Norman Maurer
0f34345347
Merge ChannelInboundHandler and ChannelOutboundHandler into ChannelHa… (#8957)
Motivation:

In 42742e233f we already added default methods to Channel*Handler and deprecated the Adapter classes to simplify the class hierarchy. With this change we go even further and merge everything into just ChannelHandler. This simplifies things even more in terms of class-hierarchy.

Modifications:

- Merge ChannelInboundHandler | ChannelOutboundHandler into ChannelHandler
- Adjust code to just use ChannelHandler
- Deprecate old interfaces.

Result:

Cleaner and simpler code in terms of class-hierarchy.
2019-03-28 09:28:27 +00:00
Norman Maurer
42742e233f
Deprecate ChannelInboundHandlerAdapter and ChannelOutboundHandlerAdapter (#8929)
Motivation:

As we now use Java 8 as the minimum Java version we can deprecate ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter and just move the default implementations into the interfaces. This makes things a bit more flexible for the end-user and also simplifies the class-hierarchy.

Modifications:

- Mark ChannelInboundHandlerAdapter and ChannelOutboundHandlerAdapter as deprecated
- Add default implementations to ChannelInboundHandler / ChannelOutboundHandler
- Refactor our code to not use ChannelInboundHandlerAdapter / ChannelOutboundHandlerAdapter anymore

Result:

Clean up the class hierarchy and make things a bit more flexible.
2019-03-13 09:46:10 +01:00
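A hedged sketch of the end-user effect (assuming the post-change interfaces where the Channel*Handler methods have default implementations; LoggingInboundHandler is a made-up example):

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandler;

    // With defaults on the interface, a handler implements ChannelInboundHandler
    // directly and overrides only what it needs, instead of extending the now
    // deprecated ChannelInboundHandlerAdapter.
    public class LoggingInboundHandler implements ChannelInboundHandler {
        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            System.out.println("read: " + msg);
            ctx.fireChannelRead(msg); // keep propagating, like the default would
        }
    }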
Dmitriy Dumanskiy
2e433889b2 Improve DateFormatter parsing performance (#8821)
Motivation:

I was just looking through the code and found one interesting place, DateFormatter.tryParseMonth, that was not very efficient, so I decided to optimize it a bit.

Modification:

Changed the DateFormatter.tryParseMonth method: instead of invoking regionMatches() for every month, compare chars one by one.

Result:

DateFormatter.parseHttpDate performance improved by ~3% to ~15%.

Benchmark                                                                (DATE_STRING)   Mode  Cnt        Score       Error  Units
DateFormatter2Benchmark.parseHttpHeaderDateFormatter     Sun, 27 Jan 2016 19:18:46 GMT  thrpt    6  4142781.221 ± 82155.002  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatter     Sun, 27 Dec 2016 19:18:46 GMT  thrpt    6  3781810.558 ± 38679.061  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew  Sun, 27 Jan 2016 19:18:46 GMT  thrpt    6  4372569.705 ± 30257.537  ops/s
DateFormatter2Benchmark.parseHttpHeaderDateFormatterNew  Sun, 27 Dec 2016 19:18:46 GMT  thrpt    6  4339785.100 ± 57542.660  ops/s
2019-02-04 10:04:35 +01:00
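The char-by-char comparison technique, in an illustrative form (not the actual DateFormatter code):

    final class MonthMatchExample {
        // Check whether the three chars at 'start' spell "Jan" without
        // allocating or delegating to String#regionMatches.
        static boolean isJan(CharSequence txt, int start) {
            return txt.charAt(start) == 'J'
                    && txt.charAt(start + 1) == 'a'
                    && txt.charAt(start + 2) == 'n';
        }
    }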
田欧
6222101924 migrate java8: use lambda and method reference (#8781)
Motivation:

We can use lambdas now as we use Java8.

Modification:

Use lambdas in all packages; #8751 only migrated the transport package.

Result:

Code cleanup.
2019-01-29 14:06:05 +01:00
田欧
e941cbe27a remove unused import statement (#8792)
Motivation:
The code contained some unused import statements.

Modification:
Remove unused import statements.

Result:
Code cleanup
2019-01-28 16:50:15 +01:00
Norman Maurer
310f31b392
Update to new checkstyle plugin (#8777)
Motivation:

We need to update to a new checkstyle plugin to allow the usage of lambdas.

Modifications:

- Update to new plugin version.
- Fix checkstyle problems.

Result:

Be able to use checkstyle plugin which supports new Java syntax.
2019-01-24 16:24:19 +01:00
Norman Maurer
3d6e6136a9
Decouple EventLoop details from the IO handling for each transport to… (#8680)
* Decouple EventLoop details from the IO handling for each transport to allow easy re-use of code and customization

Motivation:

Today, extending EventLoop implementations to add custom logic / metrics / instrumentation is only possible in a very limited way, if at all. This is due to the fact that most implementations are final or even package-private. That said, even if they were public, the ability to do something useful with them would be very limited, as the IO processing and task processing are very tightly coupled. All of the mentioned things are a big pain point in Netty 4.x and need improvement.

Modifications:

This changeset decouples the IO processing logic from the task processing logic for the main transports (NIO, Epoll, KQueue) by introducing the concept of an IoHandler. The IoHandler is responsible for waiting for IO readiness and processing the resulting IO events. The execution of the IoHandler is done by the SingleThreadEventLoop as part of its EventLoop processing. This allows the same EventLoopGroup (MultiThreadEventLoopGroup) to be used for all the mentioned transports by just specifying a different IoHandlerFactory during construction.

Besides this core API change, this changeset also makes it possible to easily extend SingleThreadEventExecutor / SingleThreadEventLoop to add custom logic, which can then be reused by all the transports. The ideas are very similar to what is provided by ScheduledThreadPoolExecutor (which is part of the JDK). This allows, for example, things like:

  * Adding instrumentation / metrics:
    * how many Channels are registered on a SingleThreadEventLoop
    * how many Channels were handled during the IO processing in an EventLoop run
    * how many tasks were handled during the last EventLoop / EventExecutor run
    * how many outstanding tasks we have
    ...
    ...
  * Implementing custom strategies for choosing the next EventExecutor / EventLoop to use based on these metrics.
  * Use different Promise / Future / ScheduledFuture implementations
  * decorate Runnable / Callables when submitted to the EventExecutor / EventLoop

As a lot of functionality is folded into the MultiThreadEventLoopGroup and SingleThreadEventLoopGroup, this changeset also removes:

  * AbstractEventLoop
  * AbstractEventLoopGroup
  * EventExecutorChooser
  * EventExecutorChooserFactory
  * DefaultEventLoopGroup
  * DefaultEventExecutor
  * DefaultEventExecutorGroup

Result:

Fixes https://github.com/netty/netty/issues/8514.
2019-01-23 08:32:05 +01:00
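Roughly how the construction described above could look, sketched under the assumption that the class and factory names match the description (MultithreadEventLoopGroup taking an IoHandlerFactory, NioHandler providing one); the exact API in the changeset may differ:

    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.MultithreadEventLoopGroup;
    import io.netty.channel.nio.NioHandler;

    public final class IoHandlerFactoryExample {
        public static void main(String[] args) {
            // The same EventLoopGroup implementation is reused for every transport;
            // only the IoHandlerFactory differs (NioHandler here, EpollHandler or
            // KQueueHandler elsewhere).
            EventLoopGroup group = new MultithreadEventLoopGroup(1, NioHandler.newFactory());
            group.shutdownGracefully();
        }
    }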
Dmitriy Dumanskiy
7b92ff2500 Java 8 migration. Remove ThreadLocalProvider and inline java.util.concurrent.ThreadLocalRandom.current() where necessary. (#8762)
Motivation:

Custom Netty ThreadLocalRandom and ThreadLocalRandomProvider classes are no longer needed and can be removed.

Modification:

Remove own ThreadLocalRandom

Result:

Less code to maintain
2019-01-22 20:14:28 +01:00
田欧
9d62deeb6f Java 8 migration: Use diamond operator (#8749)
Motivation:

We can use the diamond operator these days.

Modification:

Use diamond operator whenever possible.

Result:

More modern code and less boiler-plate.
2019-01-22 16:07:26 +01:00
Norman Maurer
4a82d107d2
Require Java8 as minimum (#8739)
Motivation:

While we are not yet quite sure if we want to require Java 11 as the minimum, we are at least sure we want to use Java 8 as the minimum.

Modifications:

Change the minimum version to Java 8 and update some tests which failed to compile after this change.

Result:

Use Java8 as minimum and be able to use Java8 features.
2019-01-22 08:48:18 +01:00
Norman Maurer
d9a6cf341c
Remove support for marking reader and writerIndex in ByteBuf to reduce overhead and complexity. (#8636)
Motivation:

ByteBuf supports “marker indexes”. The intended use case for these is that if a speculative operation (e.g. decode) is in progress the user can “mark” an index and refer to it later if the operation isn’t successful (e.g. not enough data). However this is rarely used in practice,
requires extra memory to maintain, and introduces complexity in the state management for derived/pooled buffer initialization, resizing, and other operations which may modify reader/writer indexes.

Modifications:

Remove support for marking and adjust testcases / code.

Result:

Fixes https://github.com/netty/netty/issues/8535.
2018-12-11 14:00:49 +01:00
Bryce Anderson
6563f23a9b Don't swallow intermediate write failures in MessageToMessageEncoder (#8454)
Motivation:

If the encoder needs to flush more than one outbound message it will
create a new ChannelPromise for all but the last write which will
swallow failures.

Modification:

Use a PromiseCombiner in the case of multiple messages when the parent
promise isn't the `VoidPromise`.

Result:

Intermediate failures are propagated to the original ChannelPromise.
2018-11-03 10:36:26 +01:00
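A hedged usage sketch of the pattern (assuming the PromiseCombiner(EventExecutor) constructor; writeAll and its arguments are made up): each intermediate write's future is aggregated so a failure of any of them completes the caller's original promise.

    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelPromise;
    import io.netty.util.concurrent.PromiseCombiner;

    final class MultiWriteExample {
        static void writeAll(ChannelHandlerContext ctx, Object first, Object second, ChannelPromise promise) {
            PromiseCombiner combiner = new PromiseCombiner(ctx.executor());
            combiner.add(ctx.write(first));   // intermediate failures are no longer swallowed
            combiner.add(ctx.write(second));
            combiner.finish(promise);         // completes 'promise' once both writes complete
        }
    }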
Nick Hill
583d838f7c Optimize AbstractByteBuf.getCharSequence() in US_ASCII case (#8392)
* Optimize AbstractByteBuf.getCharSequence() in US_ASCII case

Motivation:

Inspired by https://github.com/netty/netty/pull/8388, I noticed this
simple optimization to avoid char[] allocation (also suggested in a TODO
here).

Modifications:

Return an AsciiString from AbstractByteBuf.getCharSequence() if
requested charset is US_ASCII or ISO_8859_1 (latter thanks to
@Scottmitch's suggestion). Also tweak unit tests not to require Strings
and include a new benchmark to demonstrate the speedup.

Result:

Speed-up of AbstractByteBuf.getCharSequence() in ascii and iso 8859/1
cases
2018-10-26 15:32:38 -07:00
Norman Maurer
c546ab20a1
Ensure ByteToMessageDecoder.Cumulator implementations always release in buffer. (#8325)
Motivation:

We need to ensure the Cumulator always releases the input buffer if it cannot take over ownership of it, as otherwise it may leak.

Modifications:

- Correctly ensure the buffer is always released.
- Add unit tests.

Result:

Ensure buffer is always released.
2018-09-27 07:38:42 +02:00
Norman Maurer
dc1b511fcf
Correctly reset offset when fail lazy because of too long frame. (#8257)
Motivation:

We need to reset the offset to 0 when we fail lazily because of a too-long frame.

Modifications:

- Reset offset
- Add testcase

Result:

Fixes https://github.com/netty/netty/issues/8256.
2018-09-04 19:13:56 +02:00
Norman Maurer
1611acf4ce
Fix CharSequenceValueConverter.convertToByte implementation for AsciiString (#7994)
Motivation:

The implementation of CharSequenceValueConverter.convertToByte did not correctly handle AsciiString if the length != 1.

Modifications:

- Only use fast-path for AsciiString with length of 1.
- Add unit tests.

Result:

Fixes https://github.com/netty/netty/issues/7990
2018-06-01 21:15:08 +02:00
Norman Maurer
1d09efeeb2
Correctly copy existing elements when CodecOutputList.add(index, element) is called. (#7939)
Motivation:

We did not correctly copy elements in some cases when add(index, element) was used.

Modifications:

- Correctly detect when a copy is needed and when not.
- Add test case.

Result:

Fixes https://github.com/netty/netty/issues/7938.
2018-05-15 19:41:45 +02:00
ikurovsky
c58069f284 Better handling of streaming JSON data in JsonObjectDecoder (#7821)
Motivation:

When the JsonObjectDecoder determines that the incoming buffer had some data discarded, it resets the internal index to readerIndex and attempts to adjust the state, which does not work correctly for streams of JSON objects.

Modifications:

Reset the internal index to the value considering the previous reads.

Result:

JsonObjectDecoder correctly handles streams of both JSON objects and arrays with no state adjustments or repeatable reads.
2018-04-05 07:58:14 +02:00
Norman Maurer
6e6cfa0604 Add tests for EmptyHeaders
Motivation:

6e5fd9311f fixed a bug in EmptyHeaders which was never noticed before because we had no tests.

Modifications:

Add tests for EmptyHeaders.

Result:

EmptyHeaders is tested now.
2018-03-20 15:59:08 +01:00
David Nault
f40ecc3f10 Fix Snappy decoding of large 2-byte literal lengths and copy offsets
Motivation:

The Snappy decoder was failing on valid inputs containing literals
with 2-byte lengths > 0x8000 or copies with 2-byte offsets >= 0x8000.

The decoder was also enforcing an artificially low offset limit of
0x7FFF, something the Snappy format description advises against,
and which prevents decoding valid inputs generated by other encoders.

Modifications:

Interpret 2-byte literal lengths and 2-byte copy offsets as unsigned
shorts, in accordance with the format description and reference
implementation.

Allow any positive offset value. Throw an appropriate exception
for negative values (which can theoretically occur due to arithmetic
overflow on 4-byte offsets, but are unlikely to occur in the wild).

Result:

The Snappy decoder can handle valid inputs that previously caused
it to throw exceptions.
2018-02-20 11:42:23 +01:00
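Illustrative only (not the decoder's actual code): reading a 2-byte Snappy length or copy offset as an unsigned little-endian value keeps values >= 0x8000 valid instead of turning them negative.

    import io.netty.buffer.ByteBuf;

    final class SnappyUnsignedShortExample {
        static int readTwoByteValue(ByteBuf in) {
            return in.readUnsignedShortLE(); // 0..65535, never negative
        }
    }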
Norman Maurer
756854e99a Correctly implement CharSequenceValueConvert.convertTimeMillis
Motivation:

If you pass the output of CharSequenceValueConverter.convertToTimeMillis to convertTimeMillis it will throw a ParseException.

Modifications:

- Correctly implement CharSequenceValueConverter.convertTimeMillis
- Add unit-tests for CharSequenceValueConverter

Result:

Correctly convert timemillis.
2018-02-16 07:44:13 +01:00
ryu1-sakai
c1d0d88f0a Implement DefaultHeaders.HeaderEntry.equals()
Motivation:

HeaderEntry.equals() inherits Object.equals(), which simply checks if two objects are the same instance.
So it returns false even when two HeaderEntry objects have the same name and value.

Modifications:

Implement HeaderEntry.equals() that follows the specification of Map.Entry.equals().
https://docs.oracle.com/javase/9/docs/api/java/util/Map.Entry.html#equals-java.lang.Object-

Result:

HeaderEntry.equals() returns true if two HeaderEntry objects have the same name and value.
2018-02-15 13:07:03 +01:00
Norman Maurer
ad6af3250c DefaultHeaders / CharSequenceValueConverter should treat boolean consistently.
Motivation:

HttpHeaders.getBoolean should return the same truth value for the same string value, regardless of the underlying type.

Modifications:

- Only treat values of true as Boolean.TRUE
- Add unit tests.

Result:

Consistent converting of values for all CharSequence implementations.
2018-02-15 08:37:46 +01:00
Norman Maurer
02b7507a62 Correctly handle the case when converting of value fails and return null or default value.
Motivation:

Headers.get* methods should not throw an exception but return null or the default value if converting the value fails.

Modifications:

- Correctly handle the case when ValueConverter throws an Exception.
- Add testcase.

Result:

Fixes [#7710].
2018-02-14 08:35:55 +01:00
Abhijit Sarkar
6ff48dcbe3 Fixes #7566 by handling concatenated GZIP streams.
Motivation:
According to RFC 1952, concatenation of valid gzip streams is also a valid gzip stream. JdkZlibDecoder only processed the first and discarded the rest.

Modifications:
- Introduced a constructor argument, decompressConcatenated; if true, JdkZlibDecoder will continue to process the stream.

Result:
- If 'decompressConcatenated = true', concatenated streams would be processed in
compliance with RFC 1952.
- If 'decompressConcatenated = false' (default), existing behavior would remain.
2018-01-17 06:10:56 +01:00
Idel Pivnitskiy
50a067a8f7 Make methods 'static' where it possible
Motivation:

Even if it's a super micro-optimization (most JVMs could optimize such
 cases at runtime), in theory (and according to some perf tests) it
 may help a bit. It also makes the code clearer and allows you to
 access such methods in the test scope directly, without an instance of
 the class.

Modifications:

Add the 'static' modifier to all methods where it is possible, mostly in
the test scope.

Result:

Cleaner code with proper 'static' modifiers.
2017-10-21 14:59:26 +02:00
Idel Pivnitskiy
558097449c Add missed 'serialVersionUID' field for Serializable classes
Motivation:

Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.

Modifications:

Add the missing 'serialVersionUID' field to all Serializable
classes.

Result:

Proper deserialization of previously serialized objects.
2017-10-21 14:41:18 +02:00
Jackie.Meng
80b8a91b70 Use offset finding eol avoid repeated scaning.
Motivation:

A large frame may be composed of many packets. Every time a packet
arrives, findEndOfLine is called from the start of the buffer, which
makes the complexity of reading a frame O(n^2). This can be
eliminated by using an offset to mark the last scan position; when a new
packet arrives, the search for the delimiter starts from the mark. The
complexity is then O(n).

Modification:

Add an offset to mark the last scan position.

Result:

Better performance when reading large frames.
2017-09-17 09:17:38 -07:00
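A conceptual sketch of the technique (field and method names are illustrative): remember how far previous scans got and resume the line-feed search from there instead of from the start of the buffer.

    import io.netty.buffer.ByteBuf;
    import io.netty.util.ByteProcessor;

    final class EolScanner {
        private int offset; // bytes already scanned, relative to readerIndex

        int findEndOfLine(ByteBuf buffer) {
            int total = buffer.readableBytes();
            int i = buffer.forEachByte(buffer.readerIndex() + offset, total - offset, ByteProcessor.FIND_LF);
            if (i >= 0) {
                offset = 0;        // frame complete: start fresh next time
            } else {
                offset = total;    // remember how far we got
            }
            return i;
        }
    }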
Scott Mitchell
44bb3b6f3a DefaultHeaders value iterator
Motivation:
The Headers interface provides a way to get all the header values corresponding to a particular name. This API returns a List, which requires intermediate storage and increases GC pressure.

Modifications:
- Add a method which returns an iterator over all the values for a specific name

Result:
Ability to iterate over values for a specific name with no intermediate collection.
2017-09-16 16:46:19 -07:00
Norman Maurer
123e07ca80 Revert "Only call ctx.fireChannelReadComplete() if ByteToMessageDecoder decoded at least one message."
This reverts commit d63bb4811e as it did not correctly cover all cases and so could lead to missing fireChannelReadComplete() calls. We will re-evaluate d63bb4811e and resubmit a PR once we are sure all cases are handled correctly.
2017-08-18 09:06:37 +02:00
Violeta Georgieva
db4781282f Handle partially decoded elements while streaming Json array
Motivation:

'insideString' and 'openBraces' need proper handling when streaming a
Json array over multiple writes and the decoding of an element was started but
not completed.
Related to #6969

Modifications:

If the idx is reset:
- 'insideString' has to be reset to 'false' in order to indicate that
  array element will be decoded from the beginning
- 'openBraces' has to be reset to '1' to indicate that Json array
  decoding is in progress.

Result:
Json array is properly decoded when in streaming mode
2017-08-08 08:48:01 +02:00
Norman Maurer
d63bb4811e Only call ctx.fireChannelReadComplete() if ByteToMessageDecoder decoded at least one message.
Motivation:

It's wasteful and also confusing that channelReadComplete() is called even if no message was forwarded to the next handler.

Modifications:

- Only call ctx.fireChannelReadComplete() if at least one message was decoded
- Add unit test

Result:

Less confusing behavior. Fixes [#4312].
2017-08-04 10:54:56 +02:00
Nikolay Fedorovskikh
6ab9c177ac Fix hash function and hash table size in Snappy
Motivation:

1. The hash function in the Snappy encoding is probably wrong: it used '+' instead of '*'. See the reference implementation [1].
2. The size of the hash table is calculated, but not applied.

Modifications:

1. Fix the hash function: replace addition with multiplication.
2. Allocate the hash table with the calculated size.
3. Use an `Integer.numberOfLeadingZeros` trick to calculate log2.
4. Release buffers in tests.

Result:

1. Better compression. In the test `encodeAndDecodeLongTextUsesCopy` the compressed size is now 175 instead of 180 before this change.
2. No redundant allocations for the hash table.
3. A slightly faster shift calculation (fewer expensive math operations).

[1] 513df5fb5a/snappy.cc (L67)
2017-08-01 07:08:54 +02:00
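A sketch of the multiply-and-shift hash as in the reference implementation (the multiplier 0x1e35a7bd is taken from snappy.cc and assumed here; Netty's exact constants are not quoted in the commit):

    final class SnappyHashExample {
        // Multiplicative hash: 'shift' is derived from the hash-table size so the
        // result indexes the table directly.
        static int hash(int bytes, int shift) {
            return (bytes * 0x1e35a7bd) >>> shift;
        }

        // log2 of a power-of-two table size via Integer.numberOfLeadingZeros,
        // as the modifications above describe.
        static int log2(int hashTableSize) {
            return 32 - Integer.numberOfLeadingZeros(hashTableSize - 1);
        }
    }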
Violeta Georgieva
96f52e05bf Fix #6969: Do not reset the states while streaming Json array
Motivation:

Calling JsonObjectDecoder#reset while streaming Json array over multiple
writes causes CorruptedFrameException to be thrown.

Modifications:

While streaming Json array and if the current readerIndex has been reset,
ensure that the states will not be reset.

Result:

Fixes #6969
2017-07-17 10:42:54 +02:00