Commit Graph

245 Commits

Scott Mitchell
ce2ce9d7a4 ByteToMessageDecoder#handlerRemoved may release cumulation buffer prematurely
Motivation:
ByteToMessageDecoder#handlerRemoved will immediately release the cumulation buffer, but it is possible that a child class may still be using this buffer, and therefore end up using an already-released buffer.

Modifications:
- ByteToMessageDecoder#handlerRemoved and ByteToMessageDecoder#decode should coordinate to avoid the case where a child class is using the cumulation buffer but ByteToMessageDecoder releases that buffer.

Result:
Child classes of ByteToMessageDecoder are less likely to reference a released buffer.
2017-05-10 11:16:26 -07:00
Andrew McCall
231e6a5b7d Calls to discardSomeReadBytes() cause the JsonObjectDecoder to get corrupted
Modification:

Added a lastReaderIndex value; if the current readerIndex has been reset (by discardSomeReadBytes()), the idx and the decoder state are reset as well.

Result:

Fixes #6156.
2017-04-27 19:34:15 +02:00
Norman Maurer
34ff9cf5f2 Fix possible overflow when calculating the size of the out buffer in Base64
Motivation:

We did not correctly guard against overflow, so calling Base64.encode(...) with a big buffer may lead to an overflow when calculating the size of the out buffer.

Modifications:

Correctly guard against overflow.

Result:

Fixes [#6620].
2017-04-21 08:11:17 +02:00
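
The overflow described in the commit above can be guarded against by doing the size calculation in 64-bit arithmetic before narrowing to int. A minimal sketch under that assumption (hypothetical helper, not Netty's actual Base64 code):

    final class Base64SizeSketch {
        // Sketch: compute the worst-case encoded size using long arithmetic and
        // verify it fits into an int before allocating the output buffer. The
        // exact formula Netty uses differs; the point is guarding the overflow.
        static int encodedBufferSize(int len, boolean breakLines) {
            long size = ((long) len << 2) / 3 + 4;   // ~4 output bytes per 3 input bytes, plus padding
            if (breakLines) {
                size += size / 76;                   // room for a newline roughly every 76 characters
            }
            if (size > Integer.MAX_VALUE) {
                throw new IllegalArgumentException("input too large to encode: " + len);
            }
            return (int) size;
        }
    }
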
Norman Maurer
38b054c65c Correctly handle read-only ByteBuf in ByteToMessageDecoder
Motivation:

If a read-only ByteBuf is passed to the ByteToMessageDecoder.channelRead(...) method we need to make a copy of it once we try to merge buffers for cumulation. This usually is not the case but can, for example, happen if the local transport is used. This was the cause of the leak report we sometimes saw during the codec-http2 tests, as we use the local transport and write a read-only buffer. This buffer is then passed to the peer channel, fired through the pipeline and so ends up as the cumulation buffer in the ByteToMessageDecoder. Once the next fragment was received we tried to merge the buffers and failed with a ReadOnlyBufferException, which then produced a leak.

Modifications:

Ensure we copy the buffer if it is read-only.

Result:

No more exceptions, and therefore no leak, when a read-only buffer is passed to ByteToMessageDecoder.channelRead(...).
2017-04-19 07:26:26 +02:00
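
A rough sketch of the kind of guard the fix adds when merging into the cumulation buffer (illustrative only, not the exact cumulator code in ByteToMessageDecoder):

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufAllocator;

    final class CumulationSketch {
        // Sketch: before appending, swap a read-only (or too small) cumulation
        // buffer for a writable copy so writeBytes(...) cannot throw
        // ReadOnlyBufferException and leak the input.
        static ByteBuf cumulate(ByteBufAllocator alloc, ByteBuf cumulation, ByteBuf in) {
            if (cumulation.isReadOnly()
                    || cumulation.writerIndex() > cumulation.maxCapacity() - in.readableBytes()) {
                ByteBuf copy = alloc.buffer(cumulation.readableBytes() + in.readableBytes());
                copy.writeBytes(cumulation);
                cumulation.release();
                cumulation = copy;
            }
            cumulation.writeBytes(in);
            in.release();
            return cumulation;
        }
    }
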
Vladimir Kostyukov
4c77e7c55a netty-codec: Manage read-flow explicitly in MessageAggregator 2017-04-17 19:37:43 +02:00
Jeff Evans
476d2aea76 Adding method to assert XML decoder framing works
Motivation:

In an effort to better understand how the XmlFrameDecoder works, I consulted the tests to find a method that would reframe the inputs as per the Javadocs for that class. I couldn't find any methods that seemed to be doing it, so I wanted to add one to reinforce my understanding.

Modification:

Add a new test method to XmlFrameDecoder to assert that the reframing works as described.

Result:

A new test method is added for XmlFrameDecoder.
2017-03-19 08:08:07 -07:00
Norman Maurer
4f78bae2eb DatagramPacketEncoder|Decoder should take into account if wrapped handler is sharable
Motivation:

DatagramPacketEncoder|Decoder should respect whether the wrapped handler is sharable and be sharable (or not) accordingly.

Modifications:

- Delegate isSharable() to wrapped handler
- Add test-cases

Result:

Correct behavior
2017-02-23 20:22:34 +01:00
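
The sharability delegation boils down to checking the wrapped handler's @Sharable annotation; a simplified sketch (not the actual DatagramPacketEncoder source):

    import io.netty.channel.ChannelHandler;

    final class SharableDelegationSketch {
        // Sketch: a wrapping codec is only safe to add to multiple pipelines if
        // the handler it delegates to is itself annotated with @Sharable.
        static boolean isSharable(ChannelHandler wrapped) {
            return wrapped.getClass().isAnnotationPresent(ChannelHandler.Sharable.class);
        }
    }
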
Scott Mitchell
77e65fe6bb Base64 reduce byte manipulation operations
Motivation:
Base64#decode4to3 generally calculates an int value where the contents of the decodabet straddle bytes, and then uses a byte shifting or a full byte swapping operation to get the resulting contents. We can directly calculate the contents and avoid any intermediate int values and full byte swap operations. This will reduce the number of operations required during the decode operation.

Modifications:
- Remove the intermediate int in the Base64#decode4to3 method.
- Manually do the byte shifting since we are already doing bit/byte manipulations here anyway.

Result:
Base64#decode4to3 requires fewer operations to compute the end result.
2017-02-22 21:32:44 -08:00
Norman Maurer
7feb92959e Improve performance of Base64.decode and encode methods.
Motivation:

The decode and encode methods use getByte(...) and setByte(...) in loops, which can be very expensive because of bounds / reference-count checking. Besides this, it also slows down a lot when paranoid leak detection is enabled, as every access is tracked.

Modifications:

- Pack bytes into int / short and so reduce operations on the ByteBuf
- Use ByteProcessor to reduce getByte calls.

Result:

Better performance in general. Also when you run the build with -Pleak the handler module will build in 1/4 of the time it took before.
2017-02-22 20:12:12 +01:00
Norman Maurer
fbf0e5f4dd Prefer JDK ThreadLocalRandom implementation over ours.
Motivation:

We have our own ThreadLocalRandom implementation to support older JDKs. That said, we should prefer the JDK-provided implementation when running on JDK >= 7.

Modification:

Use the ThreadLocalRandom implementation of the JDK when possible.

Result:

Make use of JDK implementations when possible.
2017-02-16 15:44:00 -08:00
Norman Maurer
974a251de8 Not fail tests when running on JDK9+ and init of MarshallingFactory fails
Motivation:

To use jboss-marshalling extra command-line arguments are needed on JDK9+ as it makes use of reflection internally.

Modifications:

Skip jboss-marshalling tests when running on JDK9+ and init of MarshallingFactory fails.

Result:

Be able to build on latest JDK9 release.
2017-02-14 08:27:58 +01:00
Norman Maurer
9b2b3e2512 Ensure tests pass when sun.misc.Unsafe is not present
Motivation:

We need to ensure we pass all tests when sun.misc.Unsafe is not present.

Modifications:

- Make *ByteBufAllocatorTest work whenever sun.misc.Unsafe is present or not
- Let Lz4FrameEncoderTest not depend on AbstractByteBufAllocator implementation details which take into account if sun.misc.Unsafe is present or not

Result:

Tests pass even without sun.misc.Unsafe.
2017-02-14 07:52:07 +01:00
Tim Brooks
3344cd21ac Wrap operations requiring SocketPermission with doPrivileged blocks
Motivation:

Currently Netty does not wrap socket connect, bind, or accept
operations in doPrivileged blocks. Nor does it wrap cases where a dns
lookup might happen.

This prevents an application utilizing the SecurityManager from
isolating SocketPermissions to Netty.

Modifications:

I have introduced a class (SocketUtils) that wraps operations
requiring SocketPermissions in doPrivileged blocks.

Result:

A user of Netty can grant SocketPermissions explicitly to the Netty
jar, without granting it to the rest of their application.
2017-01-19 21:12:52 +01:00
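
A minimal sketch of the pattern, in the spirit of the SocketUtils class mentioned above (the method shown here is illustrative, not its exact API):

    import java.io.IOException;
    import java.net.SocketAddress;
    import java.nio.channels.SocketChannel;
    import java.security.AccessController;
    import java.security.PrivilegedActionException;
    import java.security.PrivilegedExceptionAction;

    final class PrivilegedConnectSketch {
        // Sketch: run the SocketPermission-requiring call inside doPrivileged so
        // only the code actually granted the permission (e.g. the Netty jar)
        // needs it, not the whole application on the call stack.
        static boolean connect(final SocketChannel channel, final SocketAddress remote) throws IOException {
            try {
                return AccessController.doPrivileged(new PrivilegedExceptionAction<Boolean>() {
                    @Override
                    public Boolean run() throws IOException {
                        return channel.connect(remote);
                    }
                });
            } catch (PrivilegedActionException e) {
                throw (IOException) e.getCause();
            }
        }
    }
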
Jason Brown
3ea807e375 Flush LZ4FrameEncoder buffer when channel flush() is received.
Motivation:

LZ4FrameEncoder maintains an internal buffer of incoming data to compress, and only writes out compressed data when a size threshold is reached. LZ4FrameEncoder does not override the flush() method, and thus the only way to flush data down the pipeline is to send more data or close the channel.

Modifications:

Override the flush() function to flush on demand. Also override the allocateBuffer() function so we can more accurately size the output buffer (instead of potentially needing to realloc via buffer.ensureWritable()).

Result:

Implementation works as described.
2017-01-18 10:57:21 -08:00
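
The flush-on-demand idea can be sketched as follows; this is a simplified stand-in, not the real LZ4FrameEncoder, and compress(...) is a placeholder:

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelOutboundHandlerAdapter;
    import io.netty.channel.ChannelPromise;

    // Sketch: an encoder that accumulates outbound data must override flush() so
    // that a pipeline flush() writes out whatever is buffered, even if the
    // block-size threshold has not been reached yet.
    final class BufferingEncoderSketch extends ChannelOutboundHandlerAdapter {
        private ByteBuf buffer;

        @Override
        public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
            if (msg instanceof ByteBuf) {
                if (buffer == null) {
                    buffer = ctx.alloc().buffer();
                }
                ByteBuf data = (ByteBuf) msg;
                buffer.writeBytes(data);
                data.release();
                // Buffered only; a real encoder would track the promise until the data is actually written.
                promise.setSuccess();
            } else {
                ctx.write(msg, promise);
            }
        }

        @Override
        public void flush(ChannelHandlerContext ctx) throws Exception {
            if (buffer != null && buffer.isReadable()) {
                ctx.write(compress(buffer));   // placeholder for the real LZ4 block encoding
                buffer = null;
            }
            ctx.flush();
        }

        private ByteBuf compress(ByteBuf in) {
            return in;   // placeholder: a real encoder would emit a compressed frame here
        }
    }
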
Johno Crawford
84410f97af Add unit test that shows LineBasedFrameDelimiter correctly handles fragmented data.
Motivation:

Verify everything works as expected.

Modifications:

Added testcase.

Result:

More test-coverage.
2017-01-12 07:50:31 +01:00
Norman Maurer
7a4b0c3297 Add unit test that shows LineBasedFrameDelimiter correctly splits line.
Motivation:

Thought there may be a bug so added a testcase to verify everything works as expected.

Modifications:

Added testcase

Result:

More test-coverage.
2017-01-11 08:00:47 +01:00
Stephane Landelle
f755e58463 Clean up following #6016
Motivation:

* DefaultHeaders from netty-codec has some duplicated logic for header date parsing
* Several classes keep on using deprecated HttpHeaderDateFormat

Modifications:

* Move HttpHeaderDateFormatter to netty-codec and rename it into HeaderDateFormatter
* Make DefaultHeaders use HeaderDateFormatter
* Replace HttpHeaderDateFormat usage with HeaderDateFormatter

Result:

Faster and more consistent code
2016-11-21 12:35:40 -08:00
Norman Maurer
0bc30a123e Eliminate usage of releaseLater(...) to reduce memory usage during tests
Motivation:

We used ReferenceCountUtil.releaseLater(...) in our tests, which simplifies releasing ReferenceCounted objects a bit. The problem is that while it simplifies things, it increases memory usage a lot, as memory may not be freed up in a timely manner.

Modifications:

- Deprecate releaseLater(...)
- Remove usage of releaseLater(...) in tests.

Result:

Less memory needed to build netty while running the tests.
2016-11-18 09:34:11 +01:00
Scott Mitchell
e7631867d3 LzmaFrameEncoderTest double release
Motivation:
2c78902ebc ensured buffers were released in the general case but didn't clean up an extra release in LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData, which led to a double release.

Modifications:
LzmaFrameEncoderTest#testCompressionOfBatchedFlowOfData should not explicitly release the buffer because decompress will release the buffer

Result:
No more reference count exception and failed test.
2016-11-16 09:55:38 -08:00
Scott Mitchell
2c78902ebc LzmaFrameEncoderTest leak due to LzmaInputStream close behavior
Motivation:
c1932a8537 made an assumption that the LzmaInputStream which wraps a ByteBufInputStream would delegate the close operation to the wrapped stream. This assumption is not true and thus we still had a leak. An issue has been logged with our LZMA dependency https://github.com/jponge/lzma-java/issues/14.

Modifications:
- Force a close on the wrapped stream

Result:
No more leak.
2016-11-15 17:07:09 -08:00
Scott Mitchell
c1932a8537 ByteBuf Input Stream Reference Count Ownership
Motivation:
Netty provides an adaptor from ByteBuf to Java's InputStream interface. The JDK stream interfaces have an explicit lifetime because they implement the Closeable interface. This lifetime may be different from that of the wrapped ByteBuf and is controlled by the code which accepts the JDK stream. However, Netty's ByteBufInputStream currently does not take reference count ownership of the underlying ByteBuf. There may be no way for existing classes which only accept the InputStream interface to communicate when they are done with the stream, other than calling close(). This means that when the stream is closed it may be appropriate to release the underlying ByteBuf, as ownership of the underlying ByteBuf resource may be transferred to the Java stream.

Modifications:
- ByteBufInputStream.close() supports taking reference count ownership of the underlying ByteBuf

Result:
ByteBufInputStream can assume reference count ownership so the underlying ByteBuf can be cleaned up when the stream is closed.
2016-11-14 16:29:55 -08:00
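
Assuming the ownership transfer is exposed as a boolean constructor flag (as in current Netty 4.1 releases), usage looks roughly like this:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.ByteBufInputStream;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    import java.io.IOException;
    import java.io.InputStream;

    final class StreamOwnershipSketch {
        public static void main(String[] args) throws IOException {
            ByteBuf buf = Unpooled.copiedBuffer("hello", CharsetUtil.UTF_8);
            // true = the stream takes reference count ownership: closing the stream
            // releases the underlying ByteBuf, so the caller must not release it again.
            try (InputStream in = new ByteBufInputStream(buf, true)) {
                while (in.read() != -1) {
                    // consume the content
                }
            }
        }
    }
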
Scott Mitchell
d479e939b0 Buffer Leaks in Compression Tests
Motivation:
The unit tests for the compression encoders/decoders may write buffers to an EmbeddedChannel but then may not release the buffers or close the channel after the test. This may result in buffer leaks.

Modifications:
- Call channel.finishAndReleaseAll() after each test

Result:
Fixes https://github.com/netty/netty/issues/6007
2016-11-14 16:24:22 -08:00
Scott Mitchell
e47da7be77 CompatibleObjectEncoder cached ObjectOutputStream backed by released buffer bug
Motivation:
CompatibleObjectEncoder uses a Channel Attribute to cache an ObjectOutputStream which is backed by a ByteBuf that may be released after an object is encoded and the underlying buffer is written to the channel. On subsequent encode operations the cached ObjectOutputStream will be invalid and lead to a reference count exception.

Modifications:
- CompatibleObjectEncoder should not cache an ObjectOutputStream.

Result:
CompatibleObjectEncoder doesn't use a cached object backed by a released ByteBuf.
2016-11-10 10:04:01 -08:00
radai-rosenblatt
15ac6c4a1f Clean-up unused imports
Motivation:

The build doesn't seem to enforce this, so unused imports piled up.

Modifications:

Removed unused import lines.

Result:

Fewer unused imports.

Signed-off-by: radai-rosenblatt <radai.rosenblatt@gmail.com>
2016-09-30 09:08:50 +02:00
Guido Medina
f0a5ee068f Update dependencies and plugins to latest possible versions.
Motivation:
It is good to keep the used dependencies and plugins up to date in order to pick up any bug fixes made by their authors.

Modification:
Scanned dependencies and plugins and carefully updated them one by one.

Result:
Dependencies and plugins are up-to-date.
2016-06-27 13:35:35 +02:00
buchgr
af7b0a04a0 Fix DefaultHeaders.toString() for keys with multiple values.
Motivation:

For example,

DefaultHttp2Headers headers = new DefaultHttp2Headers();
headers.add("key1", "value1");
headers.add("key1", "value2");
headers.add("key1", "value3");
headers.add("key2", "value4");

produces:

DefaultHttp2Headers[key1: value1key1: value2key1: value3, key2: value4]

while the correct output would be

DefaultHttp2Headers[key1: value1, key1: value2, key1: value3, key2: value4]

Modifications:

Change the toString() method to produce the aforementioned output.

Result:

toString() format is correct also for keys with multiple values.
2016-06-02 17:24:51 +02:00
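
The corrected formatting simply inserts the separator between every name/value entry, including repeated names. A generic sketch of that logic (not the actual DefaultHeaders code):

    import java.util.AbstractMap.SimpleEntry;
    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;

    final class HeadersToStringSketch {
        static String headersToString(String className, List<? extends Map.Entry<String, String>> entries) {
            StringBuilder sb = new StringBuilder(className).append('[');
            String separator = "";
            for (Map.Entry<String, String> e : entries) {
                sb.append(separator).append(e.getKey()).append(": ").append(e.getValue());
                separator = ", ";   // a comma between every entry, even for repeated names
            }
            return sb.append(']').toString();
        }

        public static void main(String[] args) {
            System.out.println(headersToString("DefaultHttp2Headers", Arrays.asList(
                    new SimpleEntry<>("key1", "value1"),
                    new SimpleEntry<>("key1", "value2"),
                    new SimpleEntry<>("key2", "value4"))));
            // prints: DefaultHttp2Headers[key1: value1, key1: value2, key2: value4]
        }
    }
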
Norman Maurer
7b25402e80 Add CompositeByteBuf.addComponent(boolean ...) method to simplify usage
Motivation:

At the moment the user is responsible for increasing the writer index of the composite buffer when a new component is added. We should add methods that handle this for the user, as this is the most popular usage of the composite buffer.

Modifications:

Add new methods that automatically increase the writerIndex when buffers are added.

Result:

Easier usage of CompositeByteBuf.
2016-05-21 19:52:16 +02:00
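
Usage of the new convenience method, roughly (a small sketch, not taken from the Netty docs):

    import io.netty.buffer.ByteBufAllocator;
    import io.netty.buffer.CompositeByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    final class CompositeAddComponentSketch {
        public static void main(String[] args) {
            CompositeByteBuf composite = ByteBufAllocator.DEFAULT.compositeBuffer();

            // Passing true advances the writerIndex past the added component, so the
            // data is readable right away without a manual writerIndex(...) call.
            composite.addComponent(true, Unpooled.copiedBuffer("Hello, ", CharsetUtil.UTF_8));
            composite.addComponent(true, Unpooled.copiedBuffer("Netty!", CharsetUtil.UTF_8));

            System.out.println(composite.toString(CharsetUtil.UTF_8)); // Hello, Netty!
            composite.release();
        }
    }
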
Trustin Lee
3a9f472161 Make retained derived buffers recyclable
Related: #4333 #4421 #5128

Motivation:

slice(), duplicate() and readSlice() currently create a non-recyclable
derived buffer instance. Under heavy load, an application that creates a
lot of derived buffers can put the garbage collector under pressure.

Modifications:

- Add the following methods, which create a retained derived buffer
  - retainedSlice()
  - retainedDuplicate()
  - readRetainedSlice()
- Add the new recyclable derived buffer implementations, each of which has
  its own reference count value
- Add ByteBufHolder.retainedDuplicate()
- Add ByteBufHolder.replace(ByteBuf) so that:
  - a user can replace the content of the holder in a consistent way
  - copy/duplicate/retainedDuplicate() can delegate the holder
    construction to replace(ByteBuf)
- Use retainedDuplicate() and retainedSlice() wherever possible
- Miscellaneous:
  - Rename DuplicateByteBufTest to DuplicatedByteBufTest (missing 'D')
  - Make ReplayingDecoderByteBuf.reject() return an exception instead of
    throwing it so that its callers don't need to add a dummy return
    statement

Result:

Derived buffers are now recycled when created via retainedSlice() and
retainedDuplicate() and derived from a pooled buffer
2016-05-17 11:16:13 +02:00
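
Typical usage of the retained variants described above, as a short sketch:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.util.CharsetUtil;

    final class RetainedDerivedSketch {
        public static void main(String[] args) {
            ByteBuf buf = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8);

            // Equivalent to buf.slice(0, 5).retain(), but the returned derived buffer
            // may come from a recycler instead of being a freshly allocated wrapper.
            ByteBuf slice = buf.retainedSlice(0, 5);

            buf.release();                                          // the slice still holds a reference
            System.out.println(slice.toString(CharsetUtil.UTF_8));  // hello
            slice.release();                                        // last reference: memory is freed
        }
    }
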
Xiaoyan Lin
ce1ae0eb8b Handle the backslash with double quote in JsonObjectDecoder
Motivation:

The double quote may be escaped in a JSON string, but JsonObjectDecoder doesn't handle it. Resolves #5157.

Modifications:

Don't end a JSON string when processing an escaped double quote.

Result:

JsonObjectDecoder can handle backslash and double quote in a JSON string correctly.
2016-05-04 14:04:39 +02:00
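
A quick way to see the behaviour is to push a JSON object containing an escaped double quote through an EmbeddedChannel; a small sketch:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.embedded.EmbeddedChannel;
    import io.netty.handler.codec.json.JsonObjectDecoder;
    import io.netty.util.CharsetUtil;

    final class JsonEscapedQuoteSketch {
        public static void main(String[] args) {
            EmbeddedChannel ch = new EmbeddedChannel(new JsonObjectDecoder());

            // The \" inside the string value must not terminate the JSON string,
            // otherwise the decoder mis-frames the object.
            ch.writeInbound(Unpooled.copiedBuffer("{\"msg\":\"say \\\"hi\\\"\"}", CharsetUtil.UTF_8));

            ByteBuf json = ch.readInbound();
            System.out.println(json.toString(CharsetUtil.UTF_8)); // {"msg":"say \"hi\""}
            json.release();
            ch.finish();
        }
    }
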
Norman Maurer
6108b7297b Correctly handle ChannelInputShutdownEvent in ReplayingDecoder
Motivation:

b112673554 added ChannelInputShutdownEvent support to ByteToMessageDecoder but missed updating the code for ReplayingDecoder. This has the effect:

- If a ChannelInputShutdownEvent is fired ByteToMessageDecoder (the super-class of ReplayingDecoder) will call the channelInputClosed(...) method which will pass the incorrect buffer to the decode method of ReplayingDecoder.

Modifications:

Share more code between ByteToMessageDecoder and ReplayingDecoder and so also support ChannelInputShutdownEvent correctly in ReplayingDecoder.

Result:

ChannelInputShutdownEvent is correctly handled in ReplayingDecoder as well.
2016-04-14 10:20:58 +02:00
Norman Maurer
7c734fcf73 Fix resource leak in tests introduced by 69070c37ba. 2016-04-14 10:13:55 +02:00
Xiaoyan Lin
01835fdf18 Add LineEncoder to append a line separator automatically
Motivation:

See #1811

Modifications:

Add LineEncoder and LineSeparator

Result:

The user can use LineEncoder to write a String with a line separator automatically
2016-03-16 20:31:01 +01:00
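
Usage sketch for the new encoder, assuming the LineSeparator constant and constructor shown here (UNIX separator, UTF-8):

    import io.netty.channel.embedded.EmbeddedChannel;
    import io.netty.handler.codec.string.LineEncoder;
    import io.netty.handler.codec.string.LineSeparator;
    import io.netty.util.CharsetUtil;

    final class LineEncoderSketch {
        public static void main(String[] args) {
            // Appends "\n" to every outbound String and encodes it as UTF-8, so
            // handlers further up the pipeline can write bare strings.
            EmbeddedChannel ch = new EmbeddedChannel(new LineEncoder(LineSeparator.UNIX, CharsetUtil.UTF_8));
            ch.writeOutbound("first line");
            ch.writeOutbound("second line");
            ch.finishAndReleaseAll();
        }
    }
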
Xiaoyan Lin
4fb585965c Add DatagramPacketEncoder and DatagramPacketDecoder
Motivation:

A UDP-oriented codec reusing the existing encoders and decoders would be helpful. See #1350

Modifications:

Add DatagramPacketEncoder and DatagramPacketDecoder to reuse the existing encoders and decoders.

Result:

People can use DatagramPacketEncoder and DatagramPacketDecoder to wrap existing encoders and decoders to create a UDP-oriented codec.
2016-03-14 12:14:57 +01:00
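
Wrapping existing message codecs for UDP, roughly; the String codecs here are just an example of what can be wrapped:

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.DatagramChannel;
    import io.netty.handler.codec.DatagramPacketDecoder;
    import io.netty.handler.codec.DatagramPacketEncoder;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;

    final class UdpCodecSketch extends ChannelInitializer<DatagramChannel> {
        @Override
        protected void initChannel(DatagramChannel ch) {
            // Inbound: unwrap the DatagramPacket content and hand it to the wrapped StringDecoder.
            ch.pipeline().addLast(new DatagramPacketDecoder(new StringDecoder()));
            // Outbound: encode AddressedEnvelope<String, InetSocketAddress> messages
            // with the wrapped StringEncoder and re-wrap them as DatagramPackets.
            ch.pipeline().addLast(new DatagramPacketEncoder<String>(new StringEncoder()));
        }
    }
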
Norman Maurer
9aac6dac2e [#4386] ByteToMessage.decodeLast(...) should not call decode(...) if buffer is empty.
Motivation:

If the input buffer is empty we should not have decodeLast(...) call decode(...) as the user may not expect this.

Modifications:

- Do not call decode(...) in decodeLast(...) if the input buffer is empty.
- Add testcases.

Result:

decodeLast(...) will not call decode(...) if input buffer is empty.
2016-03-01 08:42:26 +01:00
Norman Maurer
65b3470456 [#4793] Correctly add newlines when encode base64
Motivation:

We did not correctly add newlines if the src data needed to be padded. This regression was introduced by '63426fc3ed083513c07a58b45381f5c10dd47061'.

Modifications:

- Correctly handle newlines
- Add unit test that proves the fix.

Result:

No more invalid base64 encoded data.
2016-02-06 09:56:21 +01:00
Xiaoyan Lin
d1ef33b8f4 Change 64 to 63 in Snappy.decodeLiteral
Motivation:

According to https://github.com/google/snappy/blob/master/format_description.txt#L55 , Snappy.decodeLiteral should handle the cases of 60, 61, 62 and 63. However right now it processes 64 instead of 63. I believe it's a typo since `tag >> 2 & 0x3F` must be less than 64.

Modifications:

Use the correct value 63.

Result:

Snappy.decodeLiteral handles the correct case.
2016-01-21 09:59:10 +01:00
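
For context, the relevant part of the Snappy literal-tag handling as a sketch (not the Netty implementation): a literal tag's 6-bit length field tops out at 63, and values 60..63 mean the real length follows in 1..4 extra bytes.

    final class SnappyLiteralSketch {
        // Sketch of the case the commit corrects: the old code checked for 64,
        // which `tag >> 2 & 0x3F` can never produce.
        static int extraLengthBytes(int tag) {
            int bits = tag >> 2 & 0x3F;   // upper 6 bits of the tag byte
            switch (bits) {
                case 60: return 1;
                case 61: return 2;
                case 62: return 3;
                case 63: return 4;
                default: return 0;        // literal length is (bits + 1), no extra bytes follow
            }
        }
    }
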
Robert Borg
3785ca9311 added support for Protobuf codec nano runtime
Motivation:

Netty was missing support for Protobuf nano runtime targeted at
weaker systems such as Android devices.

Modifications:

Added ProtobufEncoderNano and ProtobufDecoderNano
in order to provide support for the Nano runtime.

Modified ProtobufVarint32FrameDecoder and
ProtobufLengthFieldPrepender in order to remove any
dependency on either the Nano or Lite runtime by copying
the code for handling Protobuf varint32 from the Protobuf
library.

Modified the licenses and NOTICE files in order to reflect
these changes.

Added the Protobuf Nano runtime as an optional dependency.

Result:

Netty now supports Protobuf Nano runtime.
2016-01-19 21:39:17 +01:00
Norman Maurer
4cdbe39284 [#4635] Stop decoding if decoder was removed
Motivation:

We need to check if this handler was removed before continuing with decoding.
If it was removed, it is not safe to continue to operate on the buffer.

Modifications:

Check if the decoder was removed after firing messages through the pipeline.

Result:

No illegal buffer access when decoder was removed.
2016-01-05 11:01:24 +01:00
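
The guard essentially checks ChannelHandlerContext#isRemoved() inside the decode loop; a simplified sketch of the idea (not the actual callDecode(...) code):

    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;

    import java.util.List;

    final class DecodeLoopSketch {
        interface Decoder {
            void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception;
        }

        // Sketch: if decode(...) (or a handler it fired messages to) removed this
        // decoder from the pipeline, stop immediately - the cumulation buffer must
        // not be touched any further.
        static void callDecode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out, Decoder decoder)
                throws Exception {
            while (in.isReadable()) {
                int before = in.readableBytes();
                decoder.decode(ctx, in, out);
                if (ctx.isRemoved()) {
                    break;
                }
                if (in.readableBytes() == before && out.isEmpty()) {
                    break;   // nothing consumed and nothing produced: wait for more data
                }
            }
        }
    }
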
Norman Maurer
63426fc3ed Prevent adding a newline if the Base64-encoded buffer ends directly on MAX_LINE_LENGTH
Motivation:

We need to ensure we do not add a newline if the Base64-encoded buffer ends directly on MAX_LINE_LENGTH. If we fail to do so, we produce invalid data.
Because of this bug OpenSslServerContext and OpenSslClientContext may fail to load a cert.

Modifications:

- Only add NEW_LINE if we are not at the end of the dst buffer.
- Add unit test

Result:

Correct result in all cases
2015-12-18 20:57:59 +01:00
Louis Ryan
6e108cb96a Improve the performance of copying header sets when hashing and name validation are equivalent.
Motivation:
Headers and groups of headers are frequently copied and the current mechanism is slower than it needs to be.

Modifications:
Skip name validation and hash computation when they are not necessary.
Fix emergent bug in CombinedHttpHeaders identified with better testing
Fix memory leak in DefaultHttp2Headers when clearing
Added benchmarks

Result:
Faster header copying and some collateral bug fixes
2015-11-07 08:53:10 -08:00
Louis Ryan
3eb65797ed Make headers.set(self) a no-op instead of throwing. Makes it consistent with setAll
Motivation:

Makes the API contract of headers more consistent and simpler.

Modifications:

If self is passed to set then simply return

Result:

set and setAll will be consistent
2015-11-06 07:00:54 -08:00
Scott Mitchell
19658e9cd8 HTTP/2 Headers Type Updates
Motivation:
The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-8.1.2) indicates that header names consist of ASCII characters. We currently use ByteString to represent HTTP/2 header names. The HTTP/2 RFC (https://tools.ietf.org/html/rfc7540#section-10.3) also alludes to header values inheriting the same validity characteristics as HTTP/1.x. Using AsciiString for the value type of HTTP/2 headers would allow for re-use of predefined HTTP/1.x values, and make comparisons more intuitive. The Headers<T> interface could also be expanded to allow for easier use of header types which do not have the same Key and Value type.

Modifications:
- Change Headers<T> to Headers<K, V>
- Change Http2Headers<ByteString> to Http2Headers<CharSequence, CharSequence>
- Remove ByteString. Having AsciiString extend ByteString complicates equality comparisons when the hash code algorithm is no longer shared.

Result:
Http2Header types are more representative of the HTTP/2 RFC, and relationship between HTTP/2 header name/values more directly relates to HTTP/1.x header names/values.
2015-10-30 15:29:44 -07:00
Norman Maurer
ca44436ce6 [#4265] Not allow to add/set DefaultHttpHeaders to itself.
Motivation:

We should prevent adding/setting DefaultHttpHeaders to itself to avoid unexpected side effects.

Modifications:

Throw IllegalArgumentException if user tries to pass the same instance to set/add.

Result:

No surprising side-effects.
2015-09-30 08:57:53 +02:00
Norman Maurer
1fefe9affb [#4087] Correctly forward bytes when the codec is removed and handle channelInactive / channelReadComplete(...)
Motivation:

We failed to correctly implement the handlerRemoved(...) / channelInactive(...) and channelReadComplete(...) methods, which led to multiple problems:

 - Bytes were not forwarded when the codec was removed from the pipeline
 - decodeLast(...) was not called once the Channel went inactive
 - channelReadComplete was not handled correctly, which could lead to growth of the cumulation buffer.

Modifications:

- Correctly implement methods and forward to the internal ByteToMessageDecoder
- Add unit test.

Result:

Correct behaviour
2015-08-21 18:26:32 +02:00
Scott Mitchell
ba6ce5449e Headers Performance Boost and Interface Simplification
Motivation:
A degradation in performance has been observed from the 4.0 branch as documented in https://github.com/netty/netty/issues/3962.

Modifications:
- Simplify Headers class hierarchy.
- Restore the DefaultHeaders to be based upon DefaultHttpHeaders from 4.0.
- Make various other modifications that are causing hot spots.

Result:
Performance is now on par with 4.0.
2015-08-17 08:50:11 -07:00
Jakob Buchgraber
6fd0a0c55f Faster and more memory efficient headers for HTTP, HTTP/2, STOMP and SPDY. Fixes #3600
Motivation:

We noticed that the headers implementation in Netty for HTTP/2 uses quite a lot of memory
and that at least the performance of randomly accessing a header is also quite poor. The main
concern however was memory usage, as profiling has shown that a DefaultHttp2Headers
not only uses a lot of memory, it also wastes a lot due to the underlying hash maps having
to be resized potentially several times as new headers are being inserted.

This is tracked as issue #3600.

Modifications:
We redesigned the DefaultHeaders to simply take a Map object in its constructor and
reimplemented the class using only the Map primitives. That way the implementation
is very concise and hopefully easy to understand and it allows each concrete headers
implementation to provide its own map or to even use a different headers implementation
for processing requests and writing responses i.e. incoming headers need to provide
fast random access while outgoing headers need fast insertion and fast iteration. The
new implementation can support this with hardly any code changes. It also comes
with the advantage that if the Netty project decides to add a third party collections library
as a dependency, one can simply plug in one of those very fast and memory efficient map
implementations and get faster and smaller headers for free.

For now, we are using the JDK's TreeMap for HTTP and HTTP/2 default headers.

Result:

- Significantly fewer lines of code in the implementation. While the total commit is still
  roughly 400 lines less, the actual implementation is a lot less. I just added some more
  tests and microbenchmarks.

- Overall performance is up. The current implementation should be significantly faster
  for insertion and retrieval. However, it is slower when it comes to iteration. There is simply
  no way a TreeMap can have the same iteration performance as a linked list (as used in the
  current headers implementation). That's totally fine though, because when looking at the
  benchmark results @ejona86 pointed out that the performance of the headers is completely
  dominated by insertion, that is insertion is so significantly faster in the new implementation
  that it does make up for several times the iteration speed. You can't iterate what you haven't
  inserted. I am demonstrating that in this spreadsheet [1]. (Actually, iteration performance is
  only down for HTTP, it's significantly improved for HTTP/2).

- Memory is down. The implementation with TreeMap uses on avg ~30% less memory. It also does not
  produce any garbage while being resized. In load tests for GRPC we have seen a memory reduction
  of up to 1.2KB per RPC. I summarized the memory improvements in this spreadsheet [1]. The data
  was generated by [2] using JOL.

- While it was my original intent to only improve the memory usage for HTTP/2, it should be similarly
  improved for HTTP, SPDY and STOMP as they all share a common implementation.

[1] https://docs.google.com/spreadsheets/d/1ck3RQklyzEcCLlyJoqDXPCWRGVUuS-ArZf0etSXLVDQ/edit#gid=0
[2] https://gist.github.com/buchgr/4458a8bdb51dd58c82b4
2015-08-04 17:12:24 -07:00
tczerwinski
b33c7b12a4 XmlFrameDecoder is corrupt
Motivation:

Two problems:
1. The decoder assumes that as soon as it finds a '</' element it can decrement the opened XML brackets counter. This can lead to bugs when the closing bracket is not in the byteBuf yet.
2. Improper handling of more than one root element in an XML document. The first element is processed properly, the second one is not. This is caused by the assumption that the byteBuf readerIndex is 0 at the beginning of decoding.

Modifications:

Both problems were resolved by the following fixes:
1. Decrement the opened brackets count only if the '</ >' enclosing bracket is found.
2. Take a readerIndex higher than 0 into account when counting the output frame length.

Result:

Both problems were resolved
2015-07-29 18:49:26 +02:00
Norman Maurer
ccde870b38 Revert "Ensure channelReadComplete() is called only when necessary"
This reverts commit 27a25e29f7.
2015-04-20 09:10:41 +02:00
Scott Mitchell
9a7a85dbe5 ByteString introduced as AsciiString super class
Motivation:
The usage and code within AsciiString has exceeded the original design scope for this class. Its usage as a binary string is confusing and on the verge of violating interface assumptions in some spots.

Modifications:
- ByteString will be created as a base class to AsciiString. All of the generic byte handling processing will live in ByteString and all the special character encoding will live in AsciiString.

Result:
The AsciiString interface will be clarified. Users of AsciiString can now be clear of the limitations the class imposes while users of the ByteString class don't have to live with those limitations.
2015-04-14 16:35:17 -07:00
Norman Maurer
aa1e537de4 [#3373] Rename class to match naming scheme
Motivation:

The ReplayingDecoderBuffer does not match the naming scheme we use for ByteBuf types.

Modifications:

Rename to ReplayingDecoderByteBuf to match naming scheme

Result:

Consistent naming
2015-04-12 13:32:41 +02:00
Idel Pivnitskiy
c9adb41636 Refactor tests for compression codecs
Motivation:

Too much duplicated code in the tests for different compression codecs.

Modifications:

- Added abstract classes AbstractCompressionTest, AbstractDecoderTest and AbstractEncoderTest which contain common variables and tests for any compression codec.
- Removed common tests which are implemented in AbstractDecoderTest and AbstractEncoderTest from current tests for compression codecs.
- Implemented abstract methods of AbstractDecoderTest and AbstractEncoderTest in current tests for compression codecs.
- Added additional checks for current tests.
- Renamed abstract class IntegrationTest to AbstractIntegrationTest.
- Used Theories to run tests with heap and direct buffers.
- Removed code duplicates.

Result:

Removed duplicated code from the tests for compression codecs and simplified the addition of tests for new compression codecs.
2015-04-10 15:50:41 +02:00
Robert.Panzer
18443efeab Add support for byte order to LengthFieldPrepender
Motivation:

While the LengthFieldBasedFrameDecoder supports a byte order the LengthFieldPrepender does not.
That means that I can simply add a LengthFieldBasedFrameDecoder with ByteOrder.LITTLE_ENDIAN to my pipeline
but have to write my own Encoder to write length fields in little endian byte order.

Modifications:

Added a constructor that takes a byte order and all other parameters.
All other constructors delegate to this one with ByteOrder.BIG_ENDIAN.
LengthFieldPrepender.encode() uses this byte order to write the length field.

Result:

LengthFieldPrepender will write the length field in the defined byte order.
2015-03-13 18:50:20 +01:00
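
With the new constructor, a little-endian length prefix can be paired with the matching frame decoder, roughly (the maxFrameLength here is an arbitrary example):

    import java.nio.ByteOrder;

    import io.netty.channel.ChannelPipeline;
    import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
    import io.netty.handler.codec.LengthFieldPrepender;

    final class LittleEndianFramingSketch {
        static void configure(ChannelPipeline p) {
            // Outbound: prepend a 4-byte little-endian length field (length excludes the field itself).
            p.addLast(new LengthFieldPrepender(ByteOrder.LITTLE_ENDIAN, 4, 0, false));
            // Inbound: read the little-endian length field and strip it from the frame.
            p.addLast(new LengthFieldBasedFrameDecoder(
                    ByteOrder.LITTLE_ENDIAN, 1048576, 0, 4, 0, 4, true));
        }
    }
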
Daniel Bevenius
c53b8d5a85 Suggestion for supporting single header fields.
Motivation:
At the moment if you want to return an HTTP header containing multiple
values you have to set/add that header once with all the values wanted. If
you use set/add with an array/iterable, multiple HTTP header fields will
be returned in the response.

Note, that this is indeed a suggestion and additional work and tests
should be added. This is mainly to bring up a discussion.

Modifications:
Added a flag to specify that when multiple values exist for a single
HTTP header then add them as a comma separated string.
In addition added a method to StringUtil to help escape comma separated
value charsequences.

Result:
Allows for responses to be smaller.
2015-02-18 10:54:15 +01:00
Trustin Lee
27a25e29f7 Ensure channelReadComplete() is called only when necessary
Motivation:

Even if a handler called ctx.fireChannelReadComplete(), the next handler
should not get its channelReadComplete() invoked if fireChannelRead()
was not invoked before.

Modifications:

- Ensure channelReadComplete() is invoked only when the handler of the
  current context actually produced a message, because otherwise there's
  no point of triggering channelReadComplete().
  i.e. channelReadComplete() must follow channelRead().
- Fix a bug where ctx.read() was not called if the handler of the
  current context did not produce any message, making the connection
  stall. Read the new comment for more information.

Result:

- channelReadComplete() is invoked only when it makes sense.
- No stale connection
2015-02-07 16:13:56 +09:00
Leonardo Freitas Gomes
9ee75126eb Motivation: Sonar points out an equals comparison where the compared types are different and don't share any common parent: http://clinker.netty.io/sonar/drilldown/issues/io.netty:netty-parent:master?severity=CRITICAL#
Modifications:
Converted AsciiString into a String by calling toString() method before comparing with equals(). Also added a unit-test to show that it works.

Result:
Major violation is gone. Code is correct.
2014-12-16 07:17:31 +01:00
Trustin Lee
1765429335 Revert bad renaming in ZlibTest 2014-11-19 18:36:23 +09:00
Trustin Lee
0795ee6130 Add more test cases to ZlibTest
Motivation:

Currently, we only test our ZlibEncoders against our ZlibDecoders. It is
convenient to write such tests, but it does not necessarily guarantee
their correctness. For example, both encoder and decoder might be faulty
even if the tests pass.

Modifications:

Add another test that makes sure that our GZIP encoder generates the
GZIP trailer, using the fact that GZIPInputStream raises an EOFException
when GZIP trailer is missing.

Result:

More coverage for GZIP compression
2014-11-19 18:15:56 +09:00
Scott Mitchell
50e06442c3 Backport header improvements from 5.0
Motivation:
The header class hierarchy and algorithm were improved on the master branch for versions 5.x. These improvements should be backported to the 4.1 baseline.

Modifications:
- cherry-pick the following commits from the master branch: 2374e17, 36b4157, 222d258

Result:
Header improvements in master branch are available in 4.1 branch.
2014-11-01 00:59:57 +09:00
Idel Pivnitskiy
5f94d7a319 Refactor LzfDecoder to use proper state machine
Motivation:

Make the code much more readable.

Modifications:

- Added states of decompression.
- Refactored the decode(...) method to use these states.

Result:

Much more readable decoder which looks like other compression decoders.
2014-10-20 13:59:54 +02:00
Idel Pivnitskiy
cf5aea52ed Implemented LZMA frame encoder
Motivation:

LZMA compression algorithm has a very good compression ratio.

Modifications:

- Added `lzma-java` library which implements LZMA algorithm.
- Implemented LzmaFrameEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the LzmaFrameEncoder and how it can compress data for the next uncompression using the original library.

Result:

LZMA encoder which can compress data using LZMA algorithm.
2014-09-15 15:05:36 +02:00
Norman Maurer
fbf8533759 [#2812] Ensure we call checkForSharableAnnotation in all constructors of ByteToMessageCodec
Motivation:

ByteToMessageCodec misses the check for the @Sharable annotation in one of its constructors.

Modifications:

Ensure we call checkForSharableAnnotation in all constructors.

Result:

The @Sharable annotation check is performed in all constructors of ByteToMessageCodec.
2014-08-23 21:03:25 +02:00
Trustin Lee
1971bd1da6 Rename SnappyFramedEncoder/Decoder to SnappyFrameEncoder/Decoder
Related issue: #2766

Motivation:

Forgot to rename them before the final release by mistake.

Modifications:

Rename and then re-introduce the deprecated version that extends the
renamed class.

Result:

Better naming
2014-08-14 15:17:10 -07:00
Idel Pivnitskiy
c8841bc9de Implemented LZ4 compression codec
Motivation:

LZ4 compression codec provides sending and receiving data encoded by very fast LZ4 algorithm.

Modifications:

- Added `lz4` library which implements LZ4 algorithm.
- Implemented Lz4FramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Added tests to verify the Lz4FramedEncoder and how it can compress data for the next uncompression using the original library.
- Implemented Lz4FramedDecoder which extends ByteToMessageDecoder and provides uncompression of incoming messages.
- Added tests to verify the Lz4FramedDecoder and how it can uncompress data after compression using the original library.
- Added integration tests for Lz4FramedEncoder/Decoder.

Result:

Full LZ4 compression codec which can compress/uncompress data using LZ4 algorithm.
2014-08-14 15:05:24 -07:00
Trustin Lee
f311012455 Rename FastLzFramed* to FastLzFrame* 2014-08-13 22:55:34 -07:00
Idel Pivnitskiy
a0c466a276 Implemented FastLZ compression codec
Motivation:

FastLZ compression codec provides sending and receiving data encoded by fast FastLZ algorithm using block mode.

Modifications:

- Added part of `jfastlz` library which implements FastLZ algorithm. See FastLz class.
- Implemented FastLzFramedEncoder which extends MessageToByteEncoder and provides compression of outgoing messages.
- Implemented FastLzFramedDecoder which extends ByteToMessageDecoder and provides uncompression of incoming messages.
- Added integration tests for `FastLzFramedEncoder/Decoder`.

Result:

Full FastLZ compression codec which can compress/uncompress data using FastLZ algorithm.
2014-08-12 15:14:59 -07:00
Norman Maurer
e1cc1fbabc [#2705] Call fireChannelReadComplete() if channelActive(...) decodes messages in ReplayingDecoder / ByteToMessageDecoder
Motivation:

In the ReplayingDecoder / ByteToMessageDecoder channelInactive(...) method we try to decode one last time and fire all decoded messages through the pipeline before calling ctx.fireChannelInactive(...). To keep the correct order of events we also need to call ctx.fireChannelReadComplete() if we read anything.

Modifications:

- Change channelInactive(...) to call ctx.fireChannelReadComplete() if something was decoded
- Move out.recycle() to finally block

Result:

Correct order of events.
2014-07-24 14:38:46 +02:00
Idel Pivnitskiy
4816533638 Refactor Bzip2 tests
Motivation:

The Bzip2 tests contain complicated code with some unnecessary actions.

Modifications:

- Reduce size of BYTES_LARGE array of random test data for Bzip2 tests.
- Removed unnecessary creations of EmbeddedChannel instances in Bzip2 tests.
- Simplified tests in Bzip2DecoderTest which expect exception.
- Removed unnecessary testStreamInitialization() from Bzip2EncoderTest.

Result:

Reduced time to test the 'codec' package by 7 percent, simplified code of Bzip2 tests.
2014-07-23 19:46:00 +02:00
Idel Pivnitskiy
99cf6f0732 Refactor integration tests of compression codecs
Motivation:

Duplicated code of integration tests for different compression codecs.

Modifications:

- Added abstract class IntegrationTest which contains common tests for any compression codec.
- Removed common tests from Bzip2IntegrationTest and LzfIntegrationTest.
- Implemented abstract methods of IntegrationTest in Bzip2IntegrationTest, LzfIntegrationTest and SnappyIntegrationTest.

Result:

Removed duplicated code from the integration tests for compression codecs and simplified the addition of integration tests for new compression codecs.
2014-07-23 19:44:10 +02:00
Idel Pivnitskiy
ca87cc887e Simplify Bzip2 tests
Motivation:

Sometimes we have a 'build time out' error because tests for bzip2 codec take a long time.

Modifications:

Removed cycles from Bzip2EncoderTest.testCompression(byte[]) and Bzip2DecoderTest.testDecompression(byte[]).

Result:

Reduced time to test the 'codec' package by 30 percent.
2014-07-22 18:00:26 +02:00
Norman Maurer
53141b04a8 Fix buffer leak in Bzip2EncoderTest 2014-07-19 14:41:18 +02:00
Idel Pivnitskiy
ed7240b597 Implemented a Bzip2Encoder
Motivation:

Bzip2Encoder provides sending data compressed in bzip2 format.

Modifications:

Added classes:
- Bzip2Encoder
- Bzip2BitWriter
- Bzip2BlockCompressor
- Bzip2DivSufSort
- Bzip2HuffmanAllocator
- Bzip2HuffmanStageEncoder
- Bzip2MTFAndRLE2StageEncoder
- Bzip2EncoderTest

Modified classes:
- Bzip2Constants (split BLOCK_HEADER_MAGIC and END_OF_STREAM_MAGIC)
- Bzip2Decoder (use the split magic numbers)

Added integration tests for Bzip2Encoder/Decoder

Result:

Implemented new encoder which can compress outgoing data in bzip2 format.
2014-07-17 16:19:39 +02:00
Idel Pivnitskiy
3c6017a9b1 Implemented LZF compression codec
Motivation:

LZF compression codec provides sending and receiving data encoded by very fast LZF algorithm.

Modifications:

- Added Compress-LZF library which implements LZF algorithm
- Implemented LzfEncoder which extends MessageToByteEncoder and provides compression of outgoing messages
- Added tests to verify the LzfEncoder and how it can compress data for the next uncompression using the original library
- Implemented LzfDecoder which extends ByteToMessageDecoder and provides uncompression of incoming messages
- Added tests to verify the LzfDecoder and how it can uncompress data after compression using the original library
- Added integration tests for LzfEncoder/Decoder

Result:

Full LZF compression codec which can compress/uncompress data using LZF algorithm.
2014-07-17 07:18:07 +02:00
Trustin Lee
97825598d2 Fix another buffer leaks in JsonObjectDecoderTest 2014-07-04 16:12:06 +09:00
Trustin Lee
d0b355b26e Fix a buffer leak in JsonObjectDecoderTest 2014-07-03 19:58:06 +09:00
Jakob Buchgraber
aed13ba5ef Split a JSON byte stream into JSON objects/arrays. Fixes #2536
Motivation:

See GitHub Issue #2536.

Modifications:

Introduce the class JsonObjectDecoder to split a JSON byte stream
into individual JSON objects/arrays.

Result:

A Netty application can now handle a byte stream where multiple JSON
documents follow each other as opposed to only a single JSON document
per request.
2014-07-03 18:34:29 +09:00
Trustin Lee
d0912f2709 Fix most inspector warnings
Motivation:

It's good to minimize potentially broken windows.

Modifications:

Fix most inspector warnings from our profile
Update IntObjectHashMap

Result:

Cleaner code
2014-07-02 19:55:07 +09:00
Trustin Lee
5f889d92a1 Fix buffer leaks in Bzip2Decoder(Test)
If decompression fails, the buffer that contains the decompressed data
is not released.  Bzip2DecoderTest.testStreamCrcError() also does not
release the partial output Bzip2Decoder produces.
2014-06-26 17:48:32 +09:00
Norman Maurer
56d732d439 Fix buffer leaks in Bzip2DecoderTest 2014-06-26 09:21:44 +02:00
Trustin Lee
bf85af5743 Fix buffer leaks in Bzip2DecoderTest 2014-06-24 16:47:14 +09:00
Idel Pivnitskiy
f9021a6061 Implement a Bzip2Decoder
Motivation:

Bzip2Decoder provides receiving data compressed in bzip2 format.

Modifications:

Added classes:
- Bzip2Decoder
- Bzip2Constants
- Bzip2BlockDecompressor
- Bzip2HuffmanStageDecoder
- Bzip2MoveToFrontTable
- Bzip2Rand
- Crc32
- Bzip2DecoderTest

Result:

Implemented and tested new decoder which can uncompress incoming data in bzip2 format.
2014-06-24 14:50:09 +09:00
Trustin Lee
8c25830b0b Move haproxy codec to a separate module 2014-06-21 15:59:21 +09:00
Jon Keys
d7b2affe32 Add HAProxy protocol decoder
Motivation:

The proxy protocol provides client connection information for proxied
network services. Several implementations exist (e.g. Haproxy, Stunnel,
Stud, Postfix), but the primary motivation for this implementation is to
support the proxy protocol feature of Amazon Web Services Elastic Load
Balancing.

Modifications:

This commit adds a proxy protocol decoder for proxy protocol version 1
as specified at:

  http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt

The foundation for version 2 support is also in this commit but it is
explicitly NOT supported due to a lack of external implementations to
test against.

Result:

The proxy protocol decoder can be used to send client connection
information to inbound handlers in a channel pipeline from services
which support the proxy protocol.
2014-06-21 15:59:21 +09:00
Norman Maurer
984b0aa961 [#2572] Correctly calculate length of output buffer before inflate to fix IndexOutOfBoundException
Motivation:

JdkZlibDecoder fails to decode because the length of the output buffer is not calculated correctly.
This can cause an IndexOutOfBoundsException or data corruption when the PooledByteBufAllocator is used.

Modifications:

Correctly calculate the length

Result:

No more IndexOutOfBoundsException or data-corruption.
2014-06-16 10:17:02 +02:00
Martin Krüger
d854d3a617 Fix chunk type for stream identifier
Motivation:
The problem with the current snappy implementation is that it does
not comply with the framing format definition found at
https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt

The document describes that chunk type of the stream identifier is defined
as 0xff. The current implementation uses 0x80.

Modifications:
This patch replaces the first byte of the chunk type of the stream identifier
with 0xff.

Result:
After this modification the snappy implementation is compliant to the
framing format described at
https://code.google.com/p/snappy/source/browse/trunk/framing_format.txt.
This results in a better compatibility with other implementations.
2014-04-19 21:06:28 +02:00
Norman Maurer
99995876dc Fix buffer leak in test which was introduced while implementing ZLIB_OR_NONE support. Related to [#2269] 2014-03-10 06:25:42 +01:00
Norman Maurer
d89bfc593e Fix buffer leak in test which was introduced while implementing ZLIB_OR_NONE support. Related to [#2269] 2014-03-06 20:13:30 +01:00
Jakob Buchgraber
9fb235459e Add ZLIB_OR_NONE support to JdkZlibDecoder [#2016] 2014-03-03 06:37:47 +01:00
Trustin Lee
a0378af850 Fix resource leaks in ByteArrayEncoderTest 2014-02-16 11:50:09 -08:00
Trustin Lee
df346a023b Change the return type of EmbeddedChannel.read*() from Object to an ad-hoc type parameter
.. so that there's no need to explicitly down-cast.

Fixes #2067
2014-02-13 17:19:26 -08:00
Trustin Lee
5e69955d23 Fix another buffer leak in XmlFrameDecoderTest 2014-02-13 17:15:06 -08:00
Trustin Lee
457cd2f6fa Fix buffer leaks in XmlFrameDecoderTest 2014-02-13 17:14:59 -08:00
Trustin Lee
502ccabab3 Fix inspector warnings 2014-02-13 17:13:55 -08:00
Mirko Caserta
ee8571824b CDATA support 2014-02-13 17:13:49 -08:00
Mirko Caserta
086dbd1ba1 Fixed the XML decoder 2014-02-13 17:13:39 -08:00
Trustin Lee
2d5a3b5898 Add XML decoder
- based on @mcaserta's work at https://github.com/netty/netty/pull/1121
- not ready for a merge yet
2014-02-13 17:13:29 -08:00
Norman Maurer
573b54a93d [#1907] LengthFieldPrepender should better extend MessageToMessageEncoder for less memory copies 2014-02-13 14:52:12 -08:00
Vladimir Schafer
3d531231fe #2183 Fix for releasing of the internal cumulation buffer in ByteToMessageDecoder 2014-02-06 20:07:56 +01:00
Norman Maurer
0f7379157a [#2168] Eliminate unnessary memory copy for heap buffers in JdkZlibEncoder
* Also adjust tests so it test with direct and heap buffers
2014-01-30 07:02:14 +01:00
Norman Maurer
b3d8c81557 Fix all leaks reported during tests
- One notable leak is from WebSocketFrameAggregator
- All other leaks are from tests
2013-12-07 00:44:56 +09:00
Trustin Lee
3f7b674db8 Fix bugs in ZLIB codec where they produce malformed stream or their streams are not flushed on time
- Fixes #2014
- Add the tests that mix JDK ZLIB codec and JZlib codecs
- Fix a bug where JdkZlibEncoder does not encode the GZIP header when nothing was written to the channel
- Fix a bug where the encoders do not consider the overhead of the wrapper format when calculating the estimated compressed output size.
- Fix a bug where the decoders do not discard the received data after the compressed stream is finished
2013-11-29 18:09:04 +09:00