Motivation:
`Date`, `Expires`, and `Set-Cookie` headers are being generated with a 1-digit day of month,
e.g. `Sun, 6 Nov 1994 08:49:37 GMT`. RFC 2616 specifies that `Date` and `Expires` headers should
use "a fixed-length subset of that defined by RFC 1123" which includes a 2-digit day of month.
RFC 6265 is lax in its specification of the `Set-Cookie` header and permits a 2-digit day of month.
See: https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html
See: https://tools.ietf.org/html/rfc1123#page-55
See: https://tools.ietf.org/html/rfc6265#section-5.1.1
Modifications:
- Update `DateFormatter` to correctly implement RFC 2616 headers
Result:
```
Date: Sun, 06 Nov 1994 08:49:37 GMT
Expires: Sun, 06 Nov 1994 08:49:37 GMT
Set-Cookie: id=a3fWa; Expires=Sun, 06 Nov 1994 08:49:37 GMT
```
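For illustration, the fixed-length format can be produced with the JDK's `DateTimeFormatter` (a minimal sketch using `java.time`, not Netty's actual `DateFormatter`, which targets older Java versions):
```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public final class HttpDateExample {
    // Fixed-length RFC 1123 subset mandated by RFC 2616: "dd" forces a
    // two-digit day of month, and the zone is always the literal "GMT".
    private static final DateTimeFormatter HTTP_DATE = DateTimeFormatter
            .ofPattern("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.ENGLISH);

    public static void main(String[] args) {
        ZonedDateTime date = ZonedDateTime.of(1994, 11, 6, 8, 49, 37, 0, ZoneOffset.UTC);
        System.out.println(HTTP_DATE.format(date)); // Sun, 06 Nov 1994 08:49:37 GMT
    }
}
```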
Motivation:
JsonObjectDecoderTest included 3 println(...) calls that were left over from debugging.
Modifications:
Removed println(...)
Result:
Cleanup
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
`io.netty.channel.ChannelHandler` is never used in JsonObjectDecoder.java.
Modification:
Just remove this unused import.
Result:
JsonObjectDecoder.java's imports are now simple and clean.
Motivation:
To ensure we always recycle the CodecOutputList, we should do it in a finally block.
Modifications:
Call CodecOutputList.recycle() in finally
Result:
Less chances of non-recycled lists. Related to https://github.com/netty/netty/issues/10183
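A sketch of the pattern (`CodecOutputList` is a Netty-internal type; `callDecode` and `fireChannelRead` stand in for the surrounding decoder logic):
```java
CodecOutputList out = CodecOutputList.newInstance();
try {
    callDecode(ctx, cumulation, out);
    fireChannelRead(ctx, out, out.size());
} finally {
    // Recycle even if decoding or the reads above throw, so the list is
    // always returned to its pool.
    out.recycle();
}
```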
Motivation:
In the code example of ReplayingDecoder, an input parameter List<Object> out is missing.
Modification:
Just add this parameter.
Result:
The documentation is now correct.
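With the missing parameter added, the documented decoder looks roughly like this (sketch based on the Javadoc's `IntegerHeaderFrameDecoder` example):
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;
import java.util.List;

public class IntegerHeaderFrameDecoder extends ReplayingDecoder<Void> {
    @Override
    protected void decode(ChannelHandlerContext ctx,
                          ByteBuf buf, List<Object> out) throws Exception {
        // ReplayingDecoder replays decode(...) on underflow, so the
        // length-prefixed read below needs no readableBytes() checks.
        out.add(buf.readBytes(buf.readInt()));
    }
}
```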
Motivation:
Since LZF supports both compressed and non-compressed formats, we can make LzfEncoder length-aware, giving the user control over when compression is applied.
Modification:
When the data length exceeds compressThreshold, LzfEncoder uses the compressed format; otherwise it uses the non-compressed format. Whichever format the encoder uses, LzfDecoder can decompress the data correctly.
Result:
Gives users control over when compression is applied.
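A hypothetical pipeline sketch, assuming a constructor overload that takes the threshold (the exact overloads may differ in the released API):
```java
// Payloads of at least 512 bytes are written in the LZF compressed
// format; smaller payloads use the non-compressed format. LzfDecoder
// handles both chunk formats transparently.
ChannelPipeline p = ch.pipeline();
p.addLast("lzf-encoder", new LzfEncoder(512)); // 512 = assumed compressThreshold
p.addLast("lzf-decoder", new LzfDecoder());
```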
Motivation:
The Snappy crc32c checksum produced by SnappyFrameEncoder may fail validation in Snappy decoders written in other languages, such as golang/snappy.
Modification:
- Perform the 4-byte (int) cast after the mask operation rather than before, because whether the upper 4 bytes of the Java long are retained changes the result of `(checksum >> 15 | checksum << 17) + 0xa282ead8`.
Result:
Checksum correctly calculated
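A sketch of the fixed masking per the Snappy framing format: the rotate-and-add is done in a `long` and only the final result is truncated to 32 bits, matching decoders such as golang/snappy.
```java
// checksum is an unsigned 32-bit CRC32C value held in a long, so the
// upper 32 bits are zero and '>>' behaves like a logical shift here.
static int maskChecksum(long checksum) {
    return (int) ((checksum >> 15 | checksum << 17) + 0xa282ead8L);
}
```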
Motivation:
It is impossible to know in advance how much memory will be needed to
decompress a stream of bytes that was compressed using the DEFLATE
algorithm. In theory, up to 1032 times the compressed size could be
needed. For untrusted input, an attacker could exploit this to exhaust
the memory pool.
Modifications:
ZlibDecoder and its subclasses now support an optional limit on the size
of the decompressed buffer. By default, if the limit is reached,
decompression stops and a DecompressionException is thrown. Behavior
upon reaching the limit is modifiable by subclasses in case they desire
something else.
Result:
The decompressed buffer can now be limited to a configurable size, thus
mitigating the possibility of memory pool exhaustion.
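A usage sketch assuming the new size limit is exposed as a constructor parameter (parameter position and name are illustrative):
```java
// Refuse to inflate beyond 1 MiB; a DecompressionException is thrown
// once the limit is reached, instead of exhausting the memory pool.
pipeline.addLast(new JdkZlibDecoder(ZlibWrapper.GZIP, 1024 * 1024));
```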
Motivation:
We should close the internal encoder when `LzfEncoder` is removed from the pipeline.
Modification:
Call `encoder.close()` when `handlerRemoved(...)` is triggered.
Result:
Close encoder to release internal buffer.
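The rough shape of the fix (the `encoder` field refers to LzfEncoder's internal `ChunkEncoder`):
```java
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
    // Release the chunk encoder's internal buffer when the handler
    // leaves the pipeline.
    encoder.close();
    super.handlerRemoved(ctx);
}
```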
Motivation
This PR is a reduced-scope replacement for #8931. It doesn't include the
changes related to how/when discarding read bytes is done, which we plan
to address in subsequent updates.
Modifications
- Avoid copying bytes in COMPOSITE_CUMULATOR in all cases, performing a
shallow copy where necessary; also guard against (unusual) case where
input buffer is composite with writer index != capacity
- Ensure we don't pass a non-contiguous buffer when MERGE_CUMULATOR is
used
- Manually inline some calls to ByteBuf#writeBytes(...) to eliminate
redundant checks and reduce stack depth
Also addresses prior minor review comments from @trustin
Result
More correct handling of merge/composite cases and
more efficient handling of composite case.
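An illustrative, simplified sketch of the shallow-copy cumulation; the real COMPOSITE_CUMULATOR handles more edge cases:
```java
CompositeByteBuf composite;
if (cumulation instanceof CompositeByteBuf && cumulation.refCnt() == 1) {
    composite = (CompositeByteBuf) cumulation;
} else {
    // Wrap the existing cumulation instead of copying its bytes.
    composite = alloc.compositeBuffer(Integer.MAX_VALUE)
                     .addFlattenedComponents(true, cumulation);
}
// Add the new input as another component: a shallow copy, no byte copy.
composite.addFlattenedComponents(true, in);
```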
Motivation:
ByteToMessageDecoder's default MERGE_CUMULATOR will allocate a new buffer and
copy if the refCnt() of the cumulation is > 1. However, this is overly
conservative because we may be able to avoid the allocation/copy if the current
cumulation can accommodate the input buffer without a reallocation. Also when the
reallocation and copy does occur the new buffer is sized just large enough to
accommodate the current amount of data. If some data remains in the
cumulation after decode this will require a new allocation/copy when more data
arrives.
Modifications:
- Use maxFastWritableBytes to avoid allocation/copy if the current buffer can
accommodate the input data without a reallocation operation.
- Use ByteBufAllocator#calculateNewCapacity(..) to get the size of the buffer
when a reallocation/copy operation is necessary.
Result:
ByteToMessageDecoder MERGE_CUMULATOR won't allocate/copy if the cumulation
buffer can accommodate data without a reallocation, and when a reallocation
occurs we are more likely to leave additional space for future data in an effort
to reduce overall reallocations.
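Sketched decision logic (a simplified assumption of the new behavior):
```java
if (cumulation.refCnt() == 1
        && in.readableBytes() <= cumulation.maxFastWritableBytes()) {
    // The existing buffer can absorb the input without reallocating.
    cumulation.writeBytes(in);
    in.release();
} else {
    // Let the allocator pick the next capacity step so future input is
    // likely to fit without yet another reallocation.
    int newCapacity = alloc.calculateNewCapacity(
            cumulation.readableBytes() + in.readableBytes(), Integer.MAX_VALUE);
    // ... allocate newCapacity bytes, copy cumulation and in, release both ...
}
```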
Motivation:
SnappyFrameDecoderTest has a few tests which fail to close the EmbeddedChannel
and therefore may leak ByteBuf objects.
Modifications:
- Make sure EmbeddedChannel#finishAndReleaseAll() is called in all tests
Result:
No more leaks from SnappyFrameDecoderTest.
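The typical shape of the fix (sketch):
```java
EmbeddedChannel channel = new EmbeddedChannel(new SnappyFrameDecoder());
try {
    channel.writeInbound(input);
    // ... assertions on channel.readInbound() ...
} finally {
    // Finishes the channel and releases any leftover inbound/outbound
    // messages, so nothing leaks even when an assertion fails.
    channel.finishAndReleaseAll();
}
```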
Motivation:
We did not correctly close the `EmbeddedChannel`, which meant `handlerRemoved(...)` was not called. This can lead to leaks. Besides this, we also did not correctly consume produced data, which could also show up as a leak.
Modifications:
- Always call `EmbeddedChannel.finish()`
- Ensure we consume all produced data and release it
Result:
No more leaks in test. This showed up in https://github.com/netty/netty/pull/9850#issuecomment-562504863.
Motivation:
The buffer that the decoder allocates for the expansion can leak
if there is a subsequent failure writing to it.
Modifications:
The error handling has been improved so that the new buffer is
always released if the expansion fails.
Result:
The decoder will not leak in this scenario any more.
Fixes: https://github.com/netty/netty/issues/9812
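An illustrative shape of the improved error handling (simplified from the actual expand logic):
```java
ByteBuf newCumulation = alloc.buffer(newCapacity);
try {
    newCumulation.writeBytes(cumulation);
} catch (Throwable cause) {
    // Release the freshly allocated buffer if copying into it fails;
    // otherwise it would leak.
    newCumulation.release();
    throw cause;
}
```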
Motivation:
Data flowing into the decoder should flow out in sequence, whether the decoder is removed or not.
Modification:
Fire the buffered data in `out` and clear `out` when the handler is removed,
before calling handlerRemoved(ctx).
Result:
Fixes #9668.
Motivation:
At the moment we do a ByteBuf.readBytes(...) on removal of the ByteToMessageDecoder if there are any bytes left, and forward the returned ByteBuf to the next handler in the pipeline. This is not really needed, as we can just forward the cumulation buffer directly and thus eliminate the extra memory copy.
Modifications:
Just forward the cumulation buffer directly on removal of the ByteToMessageDecoder
Result:
Less memory copies
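Roughly, a sketch of the handler-removal path:
```java
ByteBuf buf = cumulation;
cumulation = null;
if (buf.isReadable()) {
    // Hand the cumulation buffer to the next handler as-is: no
    // readBytes(...) copy is needed.
    ctx.fireChannelRead(buf);
    ctx.fireChannelReadComplete();
} else {
    buf.release();
}
```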
Motivation:
We can use the `@SuppressJava6Requirement` annotation to be more precise about when we use Java6+ APIs. This helps us to ensure we always protect these places.
Modifications:
Make use of `@SuppressJava6Requirement` explicit
Result:
Fixes https://github.com/netty/netty/issues/2509.
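A usage sketch (the annotation lives in Netty's internal util package; the guarded method shown here is illustrative):
```java
@SuppressJava6Requirement(reason = "Guarded by a PlatformDependent.javaVersion() >= 7 check")
private static void addSuppressedIfPossible(Throwable target, Throwable suppressed) {
    // Throwable#addSuppressed(...) only exists on Java 7+.
    target.addSuppressed(suppressed);
}
```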
Motivation:
In the current implementation of the Base64 decoder, the invalid
character `\u00BD` is treated as `=`.
Also, the character `\u007F` leads to an ArrayIndexOutOfBoundsException.
Modification:
Explicitly check that all input bytes are ASCII characters
(greater than zero). Fix the `decodabet` tables.
Result:
Input bytes are now correctly validated in the Base64 decoder.
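An illustrative check (not the exact patch):
```java
// Bytes >= 0x80 are negative in Java, so this rejects non-ASCII input
// such as '\u00BD' before it is used as a decodabet index; in-range
// bytes like '\u007F' are handled by the corrected decodabet tables.
if (value <= 0) {
    throw new IllegalArgumentException("invalid Base64 input character: " + value);
}
byte decoded = decodabet[value];
```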
Motivation:
The Netty homepage (netty.io) serves both "http" and "https".
It is recommended to use https rather than http.
Modification:
Changed "http://netty.io" to "https://netty.io".
Result:
No functional effects.