Use more aggressive expanding strategy in HpackHuffmanDecoder

Motivation:

Previously, we always expanded the buffer by initialCapacity, which is 32 bytes by default. This could lead to many expansions of the buffer before it finally reached a size that could fit everything.

Modifications:

Double the buffer size until a threshold of 1024 bytes is reached. After that, grow it by initialCapacity. A minimal sketch of this hybrid growth policy is shown below.
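
For illustration only, here is that growth policy extracted into a standalone Java helper (hypothetical; in the actual decoder the logic lives inline in append(), as the diff below shows):

    // Hypothetical helper mirroring the growth policy from the diff below.
    // Below 1024 bytes, double the buffer; from 1024 bytes on, grow
    // linearly by initialCapacity.
    static int newLength(int currentLength, int initialCapacity) {
        return currentLength >= 1024
                ? currentLength + initialCapacity
                : currentLength << 1;
    }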

Result:

Less expansion of the buffer (and therefore fewer allocations and copies) when the initialCapacity is not big enough. Fixes [#6864].
Norman Maurer 2017-06-15 06:51:13 +02:00
parent e597756a56
commit 575baf5050


@@ -231,8 +231,11 @@ final class HpackHuffmanDecoder {
     private void append(int i) {
         if (bytes.length == index) {
-            // Always just expand by INITIAL_SIZE
-            byte[] newBytes = new byte[bytes.length + initialCapacity];
+            // Choose an expanding strategy depending on how big the buffer already is.
+            // 1024 was chosen as a good guess; we may investigate whether there are better choices.
+            // See also https://github.com/netty/netty/issues/6846
+            final int newLength = bytes.length >= 1024 ? bytes.length + initialCapacity : bytes.length << 1;
+            byte[] newBytes = new byte[newLength];
             System.arraycopy(bytes, 0, newBytes, 0, bytes.length);
             bytes = newBytes;
         }