Commit Graph

215 Commits

Author SHA1 Message Date
Norman Maurer
1611acf4ce
Fix CharSequenceValueConverter.convertToByte implementation for AsciiString (#7994)
Motivation:

The implementation of CharSequenceValueConverter.convertToByte did not correctly handle AsciiString if the length != 1.

Modifications:

- Only use fast-path for AsciiString with length of 1.
- Add unit tests.

Result:

Fixes https://github.com/netty/netty/issues/7990
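
A minimal sketch of the kind of guard described above (the names mirror CharSequenceValueConverter and AsciiString, but the body is illustrative, not the actual patch):

// Take the single-byte fast path only for an AsciiString of length 1;
// everything else falls back to normal parsing.
public byte convertToByte(CharSequence value) {
    if (value instanceof AsciiString && value.length() == 1) {
        return ((AsciiString) value).byteAt(0);
    }
    return Byte.parseByte(value.toString());
}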
2018-06-01 21:15:08 +02:00
Norman Maurer
1d09efeeb2
Correctly copy existing elements when CodecOutputList.add(index, element) is called. (#7939)
Motivation:

We did not correctly copy elements in some cases when add(index, element) was used.

Modifications:

- Correctly detect when a copy is needed and when it is not.
- Add test case.

Result:

Fixes https://github.com/netty/netty/issues/7938.
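
For illustration only (a generic array-backed list, not the actual CodecOutputList code): elements need to be shifted only when the insertion index is strictly less than the current size.

// Hypothetical array-backed add(index, element); capacity and bounds checks omitted.
void add(int index, Object element) {
    if (index < size) {
        // Inserting before the end: shift the tail one slot to the right.
        System.arraycopy(array, index, array, index + 1, size - index);
    }
    // index == size is a plain append and needs no copy.
    array[index] = element;
    size++;
}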
2018-05-15 19:41:45 +02:00
ikurovsky
c58069f284 Better handling of streaming JSON data in JsonObjectDecoder (#7821)
Motivation:

When the JsonObjectDecoder determines that the incoming buffer has had some data discarded, it resets the internal index to readerIndex and attempts to adjust the state, which does not work correctly for streams of JSON objects.

Modifications:

Reset the internal index to a value that accounts for the data already read.

Result:

JsonObjectDecoder correctly handles streams of both JSON objects and arrays with no state adjustments or repeatable reads.
2018-04-05 07:58:14 +02:00
Norman Maurer
6e6cfa0604 Add tests for EmptyHeaders
Motivation:

6e5fd9311f fixed a bug in EmptyHeaders which was never noticed before because we had no tests.

Modifications:

Add tests for EmptyHeaders.

Result:

EmptyHeaders is tested now.
2018-03-20 15:59:08 +01:00
David Nault
f40ecc3f10 Fix Snappy decoding of large 2-byte literal lengths and copy offsets
Motivation:

The Snappy decoder was failing on valid inputs containing literals
with 2-byte lengths > 0x8000 or copies with 2-byte offsets >= 0x8000.

The decoder was also enforcing an artificially low offset limit of
0x7FFF, something the Snappy format description advises against,
and which prevents decoding valid inputs generated by other encoders.

Modifications:

Interpret 2-byte literal lengths and 2-byte copy offsets as unsigned
shorts, in accordance with the format description and reference
implementation.

Allow any positive offset value. Throw an appropriate exception
for negative values (which can theoretically occur due to arithmetic
overflow on 4-byte offsets, but are unlikely to occur in the wild).

Result:

The Snappy decoder can handle valid inputs that previously caused
it to throw exceptions.
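
The difference is essentially signed versus unsigned interpretation of the 2-byte field; a hedged illustration on a Netty ByteBuf (not the decoder's actual code):

// Snappy stores 2-byte literal lengths and copy offsets little-endian.
int asSigned = in.readShortLE();            // 0x8000 becomes -32768 when read as a signed short
int asUnsigned = in.readUnsignedShortLE();  // 0x8000 stays 32768, matching the format description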
2018-02-20 11:42:23 +01:00
Norman Maurer
756854e99a Correctly implement CharSequenceValueConverter.convertTimeMillis
Motivation:

If you pass the output of CharSequenceValueConverter.convertToTimeMillis to convertTimeMillis, it will throw a ParseException.

Modifications:

- Correctly implement CharSequenceValueConverter.convertTimeMillis
- Add unit-tests for CharSequenceValueConverter

Result:

Correctly convert time millis values.
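
The essence of the fix is the round-trip property below; this is a generic illustration with java.text.SimpleDateFormat and the HTTP date format, not the converter's internal code:

// Illustration (assumes an RFC 1123 style date format): formatting a millis value
// and parsing the result must yield the same value, to second precision.
java.text.SimpleDateFormat fmt =
        new java.text.SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", java.util.Locale.ENGLISH);
fmt.setTimeZone(java.util.TimeZone.getTimeZone("GMT"));
long millis = System.currentTimeMillis() / 1000 * 1000;  // the format carries no millis
String text = fmt.format(new java.util.Date(millis));    // millis -> CharSequence
long back = fmt.parse(text).getTime();                   // CharSequence -> millis (throws ParseException)
assert back == millis;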
2018-02-16 07:44:13 +01:00
ryu1-sakai
c1d0d88f0a Implement DefaultHeaders.HeaderEntry.equals()
Motivation:

HeaderEntry.equals() inherits Object.equals(), which simply checks whether two references point to the same object.
So it returns false even when two HeaderEntry objects have the same name and value.

Modifications:

Implement HeaderEntry.equals() that follows the specification of Map.Entry.equals().
https://docs.oracle.com/javase/9/docs/api/java/util/Map.Entry.html#equals-java.lang.Object-

Result:

HeaderEntry.equals() returns true if two HeaderEntry objects have the same name and value.
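
For reference, the Map.Entry.equals() contract the new implementation follows looks roughly like this (a sketch of the specification, not the HeaderEntry source):

// Map.Entry.equals(): two entries are equal iff both key and value are equal.
@Override
public boolean equals(Object o) {
    if (!(o instanceof Map.Entry)) {
        return false;
    }
    Map.Entry<?, ?> other = (Map.Entry<?, ?>) o;
    return (getKey() == null ? other.getKey() == null : getKey().equals(other.getKey()))
            && (getValue() == null ? other.getValue() == null : getValue().equals(other.getValue()));
}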
2018-02-15 13:07:03 +01:00
Norman Maurer
ad6af3250c DefaultHeaders / CharSequenceValueConverter should treat boolean consistently.
Motivation:

HttpHeaders.getBoolean should return the same truth value for the same string value, regardless of the underlying type.

Modifications:

- Only treat a value of 'true' as Boolean.TRUE
- Add unit tests.

Result:

Consistent conversion of values for all CharSequence implementations.
2018-02-15 08:37:46 +01:00
Norman Maurer
02b7507a62 Correctly handle the case when converting of value fails and return null or default value.
Motivation:

Headers.get* methods should not throw an exception but should return null or the default value if converting the value fails.

Modifications:

- Correctly handle the case when ValueConverter throws an Exception.
- Add testcase.

Result:

Fixes [#7710].
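
A hedged sketch of the intended behavior, using a hypothetical helper rather than the actual DefaultHeaders code:

// Hypothetical: a failed conversion is treated the same as a missing header.
public int getInt(CharSequence name, int defaultValue) {
    CharSequence value = get(name);
    if (value == null) {
        return defaultValue;
    }
    try {
        return Integer.parseInt(value.toString());
    } catch (RuntimeException ignore) {
        return defaultValue;
    }
}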
2018-02-14 08:35:55 +01:00
Abhijit Sarkar
6ff48dcbe3 Fixes #7566 by handling concatenated GZIP streams.
Motivation:
According to RFC 1952, a concatenation of valid gzip streams is also a valid gzip stream. JdkZlibDecoder only processed the first stream and discarded the rest.

Modifications:
- Introduced a constructor argument 'decompressConcatenated'; if true, JdkZlibDecoder continues to process the remaining streams.

Result:
- If 'decompressConcatenated = true', concatenated streams are processed in
compliance with RFC 1952.
- If 'decompressConcatenated = false' (default), the existing behavior remains.
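
A usage sketch, assuming the flag is exposed as a constructor overload next to the ZlibWrapper argument (the exact overloads may differ):

// Assumed overload: JdkZlibDecoder(ZlibWrapper, boolean decompressConcatenated).
pipeline.addLast(new JdkZlibDecoder(ZlibWrapper.GZIP, true));  // keep decoding after the first gzip member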
2018-01-17 06:10:56 +01:00
Idel Pivnitskiy
50a067a8f7 Make methods 'static' where possible
Motivation:

Even if it's only a micro-optimization (most JVMs can optimize such
cases at runtime), in theory (and according to some perf tests) it
may help a bit. It also makes the code clearer and allows such methods
to be accessed directly in the test scope, without an instance of
the class.

Modifications:

Add the 'static' modifier to all methods where possible, mostly in
the test scope.

Result:

Cleaner code with proper 'static' modifiers.
2017-10-21 14:59:26 +02:00
Idel Pivnitskiy
558097449c Add missing 'serialVersionUID' field for Serializable classes
Motivation:

Without a 'serialVersionUID' field, any change to a class will make
previously serialized versions unreadable.

Modifications:

Add the missing 'serialVersionUID' field to all Serializable
classes.

Result:

Proper deserialization of previously serialized objects.
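
What that looks like in practice (a generic, hypothetical class, not a specific Netty one):

// An explicit serialVersionUID keeps previously serialized instances
// readable after compatible changes to the class.
public class MyCodecEvent implements java.io.Serializable {
    private static final long serialVersionUID = 1L;  // value here is illustrative
    private String name;
}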
2017-10-21 14:41:18 +02:00
Jackie.Meng
80b8a91b70 Use an offset when finding the end of line to avoid repeated scanning.
Motivation:

A large frame may be composed of many packets. Every time a packet
arrives, findEndOfLine is called from the start of the buffer, which
makes the complexity of reading a frame O(n^2). This can be eliminated
by using an offset to mark the last scan position; when a new packet
arrives, the delimiter search resumes from that mark, and the complexity
becomes O(n).

Modification:

Add an offset to mark the last scan position.

Result:

Better performance when reading large frames.
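
A simplified, hypothetical sketch of the idea (field and method names are illustrative, not the decoder's actual code):

// 'offset' remembers how many readable bytes were already scanned without
// finding a line feed, so the next scan resumes there instead of restarting.
private int offset;

private int findEndOfLine(ByteBuf buffer) {
    int totalLength = buffer.readableBytes();
    int i = buffer.forEachByte(buffer.readerIndex() + offset, totalLength - offset,
                               ByteProcessor.FIND_LF);
    offset = i < 0 ? totalLength : 0;  // nothing found yet: skip these bytes next time
    return i;
}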
2017-09-17 09:17:38 -07:00
Scott Mitchell
44bb3b6f3a DefaultHeaders value iterator
Motivation:
The Headers interface provides a way to get all the header values corresponding to a particular name. This API returns a List, which requires intermediate storage and increases GC pressure.

Modifications:
- Add a method which returns an iterator over all the values for a specific name

Result:
Ability to iterate over the values for a specific name with no intermediate collection.
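
Usage sketch, assuming the new method is exposed as valueIterator(name):

// Iterate lazily instead of materializing a List via getAll(name).
Iterator<CharSequence> values = headers.valueIterator("set-cookie");
while (values.hasNext()) {
    process(values.next());  // process(...) is a placeholder
}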
2017-09-16 16:46:19 -07:00
Norman Maurer
123e07ca80 Revert "Only call ctx.fireChannelReadComplete() if ByteToMessageDecoder decoded at least one message."
This reverts commit d63bb4811e as it did not correctly cover all cases and so could lead to missing fireChannelReadComplete() calls. We will re-evaluate d63bb4811e and resubmit a PR once we are sure all cases are handled correctly.
2017-08-18 09:06:37 +02:00
Violeta Georgieva
db4781282f Handle partially decoded elements while streaming Json array
Motivation:

'insideString' and 'openBraces' need proper handling when streaming a
JSON array over multiple writes and the decoding of an element was
started but not completed.
Related to #6969

Modifications:

If the idx is reset:
- 'insideString' has to be reset to 'false' in order to indicate that
  the array element will be decoded from the beginning
- 'openBraces' has to be reset to '1' to indicate that JSON array
  decoding is in progress.

Result:
Json array is properly decoded when in streaming mode
2017-08-08 08:48:01 +02:00
Norman Maurer
d63bb4811e Only call ctx.fireChannelReadComplete() if ByteToMessageDecoder decoded at least one message.
Motivation:

It's wasteful and also confusing that channelReadComplete() is called even if no message was forwarded to the next handler.

Modifications:

- Only call ctx.fireChannelReadComplete() if at least one message was decoded
- Add unit test

Result:

Less confusing behavior. Fixes [#4312].
2017-08-04 10:54:56 +02:00
Nikolay Fedorovskikh
6ab9c177ac Fix hash function and hash table size in Snappy
Motivation:

1. The hash function in the Snappy encoding is probably wrong: it uses '+' instead of '*'. See the reference implementation [1].
2. The size of the hash table is calculated but never applied.

Modifications:

1. Fix hash function: replace addition by multiplication.
2. Allocate hash table with calculated size.
3. Use an `Integer.numberOfLeadingZeros` trick to calculate log2.
4. Release buffers in tests.

Result:

1. Better compression: in the test `encodeAndDecodeLongTextUsesCopy` the compressed size is now 175 instead of 180 before this change.
2. No redundant allocations for the hash table.
3. Slightly faster shift calculation (fewer expensive math operations).

[1] 513df5fb5a/snappy.cc (L67)
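
For reference, the shape of the hash in the reference implementation, with the log2 trick mentioned above (a sketch, not Netty's exact code):

static int log2(int value) {                 // value is assumed to be a power of two
    return 31 - Integer.numberOfLeadingZeros(value);
}

static int hash(int bytes, int shift) {      // shift = 32 - log2(hashTableSize)
    return (bytes * 0x1e35a7bd) >>> shift;   // multiply (not add), per snappy.cc
}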
2017-08-01 07:08:54 +02:00
Violeta Georgieva
96f52e05bf Fix #6969: Do not reset the states while streaming Json array
Motivation:

Calling JsonObjectDecoder#reset while streaming a JSON array over multiple
writes causes a CorruptedFrameException to be thrown.

Modifications:

While streaming a JSON array, if the current readerIndex has been reset,
ensure that the states are not reset.

Result:

Fixes #6969
2017-07-17 10:42:54 +02:00
Scott Mitchell
14ea69cdc1 NullPointerException in Lz4FrameEncoder
Motivation:
Lz4FrameEncoder maintains internal state, but the life cycle of its buffer is not consistently managed. The buffer is allocated in handlerAdded() but freed in close(), even though it can still be used until handlerRemoved() is called.

Modifications:
- Move the cleanup of the buffer from close to handlerRemoved
- Explicitly throw an EncoderException from Lz4FrameEncoder if the encode operation has finished and there isn't enough space to write data

Result:
No more NPE in Lz4FrameEncoder on the buffer.
2017-06-19 14:24:09 -07:00
Scott Mitchell
ce2ce9d7a4 ByteToMessageDecoder#handlerRemoved may release cumulation buffer prematurely
Motivation:
ByteToMessageDecoder#handlerRemoved will immediately release the cumulation buffer, but it is possible that a child class may still be using this buffer, and would therefore end up using an already-released buffer.

Modifications:
- ByteToMessageDecoder#handlerRemoved and ByteToMessageDecoder#decode should coordinate to avoid the case where a child class is using the cumulation buffer but ByteToMessageDecoder releases that buffer.

Result:
Child classes of ByteToMessageDecoder are less likely to reference a released buffer.
2017-05-10 11:16:26 -07:00
Andrew McCall
231e6a5b7d Calls to discardSomeReadBytes() cause the JsonObjectDecoder to get corrupted
Modification:

Added a lastReaderIndex value; if the current readerIndex has been reset, the idx and the decoder state are reset as well.

Result:

Fixes #6156.
2017-04-27 19:34:15 +02:00
Norman Maurer
34ff9cf5f2 Fix possible overflow when calculating the size of the out buffer in Base64
Motivation:

We did not correctly guard against overflow, so calling Base64.encode(...) with a big buffer could lead to an overflow when calculating the size of the out buffer.

Modifications:

Correctly guard against overflow.

Result:

Fixes [#6620].
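
The guard amounts to doing the size arithmetic in a wider type; a hypothetical sketch (not the actual Base64 source):

// Compute the encoded size in long so 4/3 of a large int length cannot overflow.
static int encodedBufferSize(int len, boolean breakLines) {
    long len43 = ((long) len << 2) / 3;   // ~4/3 of the input length
    long size = len43 + 3;                // room for padding
    if (breakLines) {
        size += len43 / 76;               // one newline per 76 output chars
    }
    if (size > Integer.MAX_VALUE) {
        throw new IllegalArgumentException("input too large to encode: " + len);
    }
    return (int) size;
}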
2017-04-21 08:11:17 +02:00
Norman Maurer
38b054c65c Correctly handle read-only ByteBuf in ByteToMessageDecoder
Motivation:

If a read-only ByteBuf is passed to the ByteToMessageDecoder.channelRead(...) method, we need to make a copy of it once we try to merge buffers for cumulation. This usually is not the case but can happen, for example, if the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we are using the local transport and write a read-only buffer. This buffer is then passed to the peer channel, fired through the pipeline and so ends up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment is received, we tried to merge the buffers and failed with a ReadOnlyBufferException, which then produced a leak.

Modifications:

Ensure we copy the buffer if it is read-only.

Result:

No more exceptions, and so no leak, when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...).
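
A hedged sketch of the check inside the cumulator (simplified, not the exact Netty code):

// A read-only cumulation would throw ReadOnlyBufferException on writeBytes(),
// so replace it with a writable copy before appending the new data.
if (cumulation.isReadOnly()) {
    ByteBuf writable = alloc.buffer(cumulation.readableBytes() + in.readableBytes());
    writable.writeBytes(cumulation);
    cumulation.release();
    cumulation = writable;
}
cumulation.writeBytes(in);
in.release();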
2017-04-19 07:26:26 +02:00
Vladimir Kostyukov
4c77e7c55a netty-codec: Manage read-flow explicitly in MessageAggregator 2017-04-17 19:37:43 +02:00
Jeff Evans
476d2aea76 Adding method to assert XML decoder framing works
Motivation:

In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.

Modification:

Add a new test method to XmlFrameDecoder to assert that the reframing works as described.

Result:

New test method is added to XmlFrameDecoder
2017-03-19 08:08:07 -07:00
Norman Maurer
4f78bae2eb DatagramPacketEncoder|Decoder should take into account if wrapped handler is sharable
Motivation:

DatagramPacketEncoder|Decoder should respect whether the wrapped handler is sharable and, depending on that, be sharable or not themselves.

Modifications:

- Delegate isSharable() to wrapped handler
- Add test-cases

Result:

Correct behavior
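
The delegation is essentially a one-method override (a sketch; the wrapped-handler field name is assumed):

// Sharable only if the wrapped encoder is sharable.
@Override
public boolean isSharable() {
    return encoder.isSharable();
}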
2017-02-23 20:22:34 +01:00
Scott Mitchell
77e65fe6bb Base64 reduce byte manipulation operations
Motivation:
Base64#decode4to3 generally calculates an int value where the contents of the decodabet straddle bytes, and then uses a byte shifting or a full byte swapping operation to get the resulting contents. We can directly calculate the contents and avoid any intermediate int values and full byte swap operations. This will reduce the number of operations required during the decode operation.

Modifications:
- Remove the intermediate int in the Base64#decode4to3 method.
- Manually do the byte shifting since we are already doing bit/byte manipulation here anyway.

Result:
Base64#decode4to3 requires fewer operations to compute the end result.
2017-02-22 21:32:44 -08:00
Norman Maurer
7feb92959e Improve performance of Base64.decode and encode methods.
Motivation:

The decode and encode methods use getByte(...) and setByte(...) in loops, which can be very expensive because of bounds / reference-count checking. Besides this, it also slows down a lot when paranoid leak detection is enabled, as each access is tracked.

Modifications:

- Pack bytes into int / short and so reduce operations on the ByteBuf
- Use ByteProcessor to reduce getByte calls.

Result:

Better performance in general. Also, when you run the build with -Pleak, the handler module builds in 1/4 of the time it took before.
2017-02-22 20:12:12 +01:00
Norman Maurer
fbf0e5f4dd Prefer JDK ThreadLocalRandom implementation over ours.
Motivation:

We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided implementation when running on JDK >= 7.

Modification:

Use the ThreadLocalRandom implementation of the JDK when possible.

Result:

Make use of JDK implementations when possible.
2017-02-16 15:44:00 -08:00
Norman Maurer
974a251de8 Do not fail tests when running on JDK9+ and init of MarshallingFactory fails
Motivation:

To use jboss-marshalling, extra command-line arguments are needed on JDK9+ as it makes use of reflection internally.

Modifications:

Skip jboss-marshalling tests when running on JDK9+ and init of MarshallingFactory fails.

Result:

Be able to build on latest JDK9 release.
2017-02-14 08:27:58 +01:00
Norman Maurer
9b2b3e2512 Ensure tests pass when sun.misc.Unsafe is not present
Motivation:

We need to ensure we pass all tests when sun.misc.Unsafe is not present.

Modifications:

- Make *ByteBufAllocatorTest work whether sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details which take into account if sun.misc.Unsafe is present or not

Result:

Tests pass even without sun.misc.Unsafe.
2017-02-14 07:52:07 +01:00
Tim Brooks
3344cd21ac Wrap operations requiring SocketPermission with doPrivileged blocks
Motivation:

Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a DNS
lookup might happen.

This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.

Modifications:

I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.

Result:

A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
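
The wrapping pattern looks roughly like this (a hedged sketch of one SocketUtils-style helper, not the exact source):

// Run the connect under doPrivileged so only Netty's ProtectionDomain
// needs the SocketPermission, not the whole call stack.
public static void connect(final Socket socket, final SocketAddress remoteAddress,
                           final int timeout) throws IOException {
    try {
        AccessController.doPrivileged(new PrivilegedExceptionAction<Void>() {
            @Override
            public Void run() throws IOException {
                socket.connect(remoteAddress, timeout);
                return null;
            }
        });
    } catch (PrivilegedActionException e) {
        throw (IOException) e.getCause();
    }
}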
2017-01-19 21:12:52 +01:00
Jason Brown
3ea807e375 Flush LZ4FrameEncoder buffer when channel flush() is received.
Motivation:

Lz4FrameEncoder maintains an internal buffer of incoming data to compress, and only writes out compressed data when a size threshold is reached. Lz4FrameEncoder does not override the flush() method, and thus the only way to flush data down the pipeline is to write more data or close the channel.

Modifications:

Override the flush() function to flush on demand. Also override the allocateBuffer() function so we can size the output buffer more accurately (instead of potentially needing to realloc via buffer.ensureWritable()).

Result:

Implementation works as described.
2017-01-18 10:57:21 -08:00
Johno Crawford
84410f97af Add unit test that shows LineBasedFrameDecoder correctly handles fragmented data.
Motivation:

Verify everything works as expected.

Modifications:

Added testcase.

Result:

More test-coverage.
2017-01-12 07:50:31 +01:00
Norman Maurer
7a4b0c3297 Add unit test that shows LineBasedFrameDecoder correctly splits lines.
Motivation:

Thought there might be a bug, so added a test case to verify everything works as expected.

Modifications:

Added testcase

Result:

More test-coverage.
2017-01-11 08:00:47 +01:00
Stephane Landelle
f755e58463 Clean up following #6016
Motivation:

* DefaultHeaders from netty-codec has some duplicated logic for header date parsing
* Several classes keep using the deprecated HttpHeaderDateFormat

Modifications:

* Move HttpHeaderDateFormatter to netty-codec and rename it to HeaderDateFormatter
* Make DefaultHeaders use HeaderDateFormatter
* Replace HttpHeaderDateFormat usage with HeaderDateFormatter

Result:

Faster and more consistent code
2016-11-21 12:35:40 -08:00
Norman Maurer
0bc30a123e Eliminate usage of releaseLater(...) to reduce memory usage during tests
Motivation:

We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies the releasing of ReferenceCounted objects a bit. The problem with this is that, while it simplifies things, it increases memory usage a lot, as memory may not be freed up in a timely manner.

Modifications:

- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.

Result:

Less memory is needed to build Netty while running the tests.
2016-11-18 09:34:11 +01:00
Scott Mitchell
e7631867d3 LzmaFrameEncoderTest double release
Motivation:
2c78902ebc ensured buffers were released in the general case but didn't clean up an extra release in LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData, which led to a double release.

Modifications:
LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData should not explicitly release the buffer because decompress will release the buffer

Result:
No more reference count exception and failed test.
2016-11-16 09:55:38 -08:00
Scott Mitchell
2c78902ebc LzmaFrameEncoderTest leak due to LzmaInputStream close behavior
Motivation:
c1932a8537 made an assumption that the LzmaInputStream which wraps a ByteBufInputStream would delegate the close operation to the wrapped stream. This assumption is not true and thus we still had a leak. An issue has been logged with our LZMA dependency https://github.com/jponge/lzma-java/issues/14.

Modifications:
- Force a close on the wrapped stream

Result:
No more leak.
2016-11-15 17:07:09 -08:00
Scott Mitchell
c1932a8537 ByteBuf Input Stream Reference Count Ownership
Motivation:
Netty provides an adaptor from ByteBuf to Java's InputStream interface. The JDK stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from that of the wrapped ByteBuf and is controlled by the interface which accepts the JDK stream. However, Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. There may be no way for existing classes which only accept the InputStream interface to communicate when they are done with the stream, other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as ownership of the underlying ByteBuf resource may be transferred to the Java stream.

Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf

Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
2016-11-14 16:29:55 -08:00
Scott Mitchell
d479e939b0 Buffer Leaks in Compression Tests
Motivation:
The unit tests for the compression encoders/decoders may write buffers to an EmbeddedChannel but then may not release the buffers or close the channel after the test. This may result in buffer leaks.

Modifications:
- Call channel.finishAndReleaseAll() after each test

Result:
Fixes https://github.com/netty/netty/issues/6007
2016-11-14 16:24:22 -08:00
Scott Mitchell
e47da7be77 CompatibleObjectEncoder cached ObjectOutputStream backed by release buffer bug
Motivation:
CompatibleObjectEncoder uses a Channel attribute to cache an ObjectOutputStream which is backed by a ByteBuf that may be released after an object is encoded and the underlying buffer is written to the channel. On subsequent encode operations the cached ObjectOutputStream will be invalid and lead to a reference count exception.

Modifications:
- CompatibleObjectEncoder should not cache an ObjectOutputStream.

Result:
CompatibleObjectEncoder doesn't use a cached object backed by a released ByteBuf.
2016-11-10 10:04:01 -08:00
radai-rosenblatt
15ac6c4a1f Clean-up unused imports
Motivation:

The build doesn't seem to enforce this, so unused imports piled up.

Modifications:

removed unused import lines

Result:

Fewer unused imports.

Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
2016-09-30 09:08:50 +02:00
Guido Medina
f0a5ee068f Update dependencies and plugins to latest possible versions.
Motivation:
It is good to keep the dependencies and plugins in use up to date to pick up any bug fixes made by their authors.

Modification:
Scanned dependencies and plugins and carefully updated them one by one.

Result:
Dependencies and plugins are up-to-date.
2016-06-27 13:35:35 +02:00
buchgr
af7b0a04a0 Fix DefaultHeaders.toString() for keys with multiple values.
Motivation:

For example,

DefaultHttp2Headers headers = new DefaultHttp2Headers();
headers.add("key1", "value1");
headers.add("key1", "value2");
headers.add("key1", "value3");
headers.add("key2", "value4");

produces:

DefaultHttp2Headers[key1: value1key1: value2key1: value3, key2: value4]

while correctly it should be

DefaultHttp2Headers[key1: value1, key1: value2, key1: value3, key2: value4]

Modifications:

Change the toString() method to produce the aforementioned output.

Result:

The toString() format is now also correct for keys with multiple values.
2016-06-02 17:24:51 +02:00
Norman Maurer
7b25402e80 Add CompositeByteBuf.addComponent(boolean ...) method to simplify usage
Motivation:

At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add some methods that handle this for the user, as this is the most common usage of the composite buffer.

Modifications:

Add new methods that automatically increase the writerIndex when buffers are added.

Result:

Easier usage of CompositeByteBuf.
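
Usage sketch of the convenience overload described above (buffer names are placeholders):

CompositeByteBuf composite = alloc.compositeBuffer();

// Before: the writerIndex had to be moved by hand after adding a component.
composite.addComponent(part1);
composite.writerIndex(composite.writerIndex() + part1.readableBytes());

// After: passing true advances the writerIndex automatically.
composite.addComponent(true, part2);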
2016-05-21 19:52:16 +02:00
Trustin Lee
3a9f472161 Make retained derived buffers recyclable
Related: #4333 #4421 #5128

Motivation:

slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.

Modifications:

- Add the following methods, which create a recyclable derived buffer:
  - retainedSlice()
  - retainedDuplicate()
  - readRetainedSlice()
- Add the new recyclable derived buffer implementations, which have their
  own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that..
  - a user can replace the content of the holder in a consistent way
  - copy/duplicate/retainedDuplicate() can delegate the holder
    construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
  - Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
  - Make ReplayingDecoderByteBuf.reject() return an exception instead of
    throwing it so that its callers don't need to add a dummy return
    statement

Result:

Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
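
Usage sketch contrasting the old and new derived-buffer calls (offset/length are placeholders):

// Before: a non-recyclable derived wrapper, retained in a second step.
ByteBuf sliced = buf.slice(offset, length).retain();

// After: one call, and the derived buffer instance can be recycled
// when buf comes from a pooled allocator.
ByteBuf retained = buf.retainedSlice(offset, length);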
2016-05-17 11:16:13 +02:00
Xiaoyan Lin
ce1ae0eb8b Handle the backslash with double quote in JsonObjectDecoder
Motivation:

The double quote may be escaped in a JSON string, but JsonObjectDecoder doesn't handle it. Resolves #5157.

Modifications:

Don't end a JSON string when processing an escaped double quote.

Result:

JsonObjectDecoder can handle backslashes and escaped double quotes in a JSON string correctly.
2016-05-04 14:04:39 +02:00
Norman Maurer
6108b7297b Correctly handle ChannelInputShutdownEvent in ReplayingDecoder
Motivation:

b112673554 added ChannelInputShutdownEvent support to ByteToMessageDecoder but missed updating the code for ReplayingDecoder. This has the following effect:

- If a ChannelInputShutdownEvent is fired, ByteToMessageDecoder (the super-class of ReplayingDecoder) will call the channelInputClosed(...) method, which will pass the incorrect buffer to the decode method of ReplayingDecoder.

Modifications:

Share more code between ByteToMessageDecoder and ReplayingDecoder and so also support ChannelInputShutdownEvent correctly in ReplayingDecoder.

Result:

ChannelInputShutdownEvent is correctly handled in ReplayingDecoder as well.
2016-04-14 10:20:58 +02:00