netty5/buffer/src/main/java/io/netty/buffer/ByteBufUtil.java


/*
* Copyright 2012 The Netty Project
*
* The Netty Project licenses this file to you under the Apache License,
* version 2.0 (the "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at:
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations
* under the License.
*/
package io.netty.buffer;
import io.netty.util.AsciiString;
import io.netty.util.ByteProcessor;
import io.netty.util.CharsetUtil;
import io.netty.util.Recycler;
import io.netty.util.Recycler.Handle;
import io.netty.util.concurrent.FastThreadLocal;
import io.netty.util.internal.PlatformDependent;
import io.netty.util.internal.StringUtil;
import io.netty.util.internal.SystemPropertyUtil;
import io.netty.util.internal.logging.InternalLogger;
import io.netty.util.internal.logging.InternalLoggerFactory;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.CodingErrorAction;
import java.util.Arrays;
import java.util.Locale;
import static io.netty.util.internal.MathUtil.isOutOfBounds;
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.StringUtil.NEWLINE;
import static io.netty.util.internal.StringUtil.isSurrogate;
/**
* A collection of utility methods related to handling {@link ByteBuf},
* such as generating hex dumps and swapping an integer's byte order.
*/
public final class ByteBufUtil {
private static final InternalLogger logger = InternalLoggerFactory.getInstance(ByteBufUtil.class);
private static final FastThreadLocal<CharBuffer> CHAR_BUFFERS = new FastThreadLocal<CharBuffer>() {
@Override
protected CharBuffer initialValue() throws Exception {
return CharBuffer.allocate(1024);
}
};
private static final byte WRITE_UTF_UNKNOWN = (byte) '?';
private static final int MAX_CHAR_BUFFER_SIZE;
private static final int THREAD_LOCAL_BUFFER_SIZE;
private static final int MAX_BYTES_PER_CHAR_UTF8 =
(int) CharsetUtil.encoder(CharsetUtil.UTF_8).maxBytesPerChar();
static final ByteBufAllocator DEFAULT_ALLOCATOR;
static {
String allocType = SystemPropertyUtil.get(
"io.netty.allocator.type", PlatformDependent.isAndroid() ? "unpooled" : "pooled");
allocType = allocType.toLowerCase(Locale.US).trim();
ByteBufAllocator alloc;
if ("unpooled".equals(allocType)) {
alloc = UnpooledByteBufAllocator.DEFAULT;
logger.debug("-Dio.netty.allocator.type: {}", allocType);
} else if ("pooled".equals(allocType)) {
alloc = PooledByteBufAllocator.DEFAULT;
logger.debug("-Dio.netty.allocator.type: {}", allocType);
} else {
alloc = PooledByteBufAllocator.DEFAULT;
logger.debug("-Dio.netty.allocator.type: pooled (unknown: {})", allocType);
}
DEFAULT_ALLOCATOR = alloc;
THREAD_LOCAL_BUFFER_SIZE = SystemPropertyUtil.getInt("io.netty.threadLocalDirectBufferSize", 64 * 1024);
logger.debug("-Dio.netty.threadLocalDirectBufferSize: {}", THREAD_LOCAL_BUFFER_SIZE);
MAX_CHAR_BUFFER_SIZE = SystemPropertyUtil.getInt("io.netty.maxThreadLocalCharBufferSize", 16 * 1024);
logger.debug("-Dio.netty.maxThreadLocalCharBufferSize: {}", MAX_CHAR_BUFFER_SIZE);
}
/**
* Returns a <a href="http://en.wikipedia.org/wiki/Hex_dump">hex dump</a>
* of the specified buffer's readable bytes.
*/
public static String hexDump(ByteBuf buffer) {
return hexDump(buffer, buffer.readerIndex(), buffer.readableBytes());
}
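// Illustrative example: hexDump(Unpooled.wrappedBuffer(new byte[] { 0x0a, (byte) 0xff })) returns "0aff".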
/**
* Returns a <a href="http://en.wikipedia.org/wiki/Hex_dump">hex dump</a>
* of the specified buffer's sub-region.
*/
public static String hexDump(ByteBuf buffer, int fromIndex, int length) {
return HexUtil.hexDump(buffer, fromIndex, length);
}
/**
* Returns a <a href="http://en.wikipedia.org/wiki/Hex_dump">hex dump</a>
* of the specified byte array.
*/
public static String hexDump(byte[] array) {
return hexDump(array, 0, array.length);
}
/**
* Returns a <a href="http://en.wikipedia.org/wiki/Hex_dump">hex dump</a>
* of the specified byte array's sub-region.
*/
public static String hexDump(byte[] array, int fromIndex, int length) {
return HexUtil.hexDump(array, fromIndex, length);
}
/**
* Decode a 2-digit hex byte from within a string.
*/
public static byte decodeHexByte(CharSequence s, int pos) {
return StringUtil.decodeHexByte(s, pos);
}
/**
* Decodes a string generated by {@link #hexDump(byte[])}
*/
public static byte[] decodeHexDump(CharSequence hexDump) {
return StringUtil.decodeHexDump(hexDump, 0, hexDump.length());
}
/**
* Decodes part of a string generated by {@link #hexDump(byte[])}
*/
public static byte[] decodeHexDump(CharSequence hexDump, int fromIndex, int length) {
return StringUtil.decodeHexDump(hexDump, fromIndex, length);
}
/**
* Calculates the hash code of the specified buffer. This method is
* useful when implementing a new buffer type.
*/
public static int hashCode(ByteBuf buffer) {
final int aLen = buffer.readableBytes();
final int intCount = aLen >>> 2;
final int byteCount = aLen & 3;
int hashCode = 1;
int arrayIndex = buffer.readerIndex();
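// Mix four bytes at a time with the classic 31-based polynomial hash. Little-endian buffers are
// normalized via swapInt(...) so that buffers with identical content hash equally regardless of byte order.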
if (buffer.order() == ByteOrder.BIG_ENDIAN) {
for (int i = intCount; i > 0; i --) {
hashCode = 31 * hashCode + buffer.getInt(arrayIndex);
arrayIndex += 4;
}
} else {
for (int i = intCount; i > 0; i --) {
hashCode = 31 * hashCode + swapInt(buffer.getInt(arrayIndex));
arrayIndex += 4;
}
}
for (int i = byteCount; i > 0; i --) {
hashCode = 31 * hashCode + buffer.getByte(arrayIndex ++);
}
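// Avoid 0 as a result, presumably so that 0 can serve as an "uncomputed" sentinel.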
if (hashCode == 0) {
hashCode = 1;
}
return hashCode;
}
/**
* Returns the reader index of needle in haystack, or -1 if needle is not in haystack.
*/
public static int indexOf(ByteBuf needle, ByteBuf haystack) {
// TODO: maybe use Boyer Moore for efficiency.
int attempts = haystack.readableBytes() - needle.readableBytes() + 1;
for (int i = 0; i < attempts; i++) {
if (equals(needle, needle.readerIndex(),
haystack, haystack.readerIndex() + i,
needle.readableBytes())) {
return haystack.readerIndex() + i;
}
}
return -1;
}
/**
* Returns {@code true} if and only if the two specified buffers are
* identical to each other for {@code length} bytes starting at {@code aStartIndex}
* index for the {@code a} buffer and {@code bStartIndex} index for the {@code b} buffer.
* A more compact way to express this is:
* <p>
* {@code a[aStartIndex : aStartIndex + length] == b[bStartIndex : bStartIndex + length]}
*/
public static boolean equals(ByteBuf a, int aStartIndex, ByteBuf b, int bStartIndex, int length) {
if (aStartIndex < 0 || bStartIndex < 0 || length < 0) {
throw new IllegalArgumentException("All indexes and lengths must be non-negative");
}
if (a.writerIndex() - length < aStartIndex || b.writerIndex() - length < bStartIndex) {
return false;
}
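// Compare eight bytes at a time where possible; the remaining 0-7 bytes are compared one by one below.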
final int longCount = length >>> 3;
final int byteCount = length & 7;
if (a.order() == b.order()) {
for (int i = longCount; i > 0; i --) {
if (a.getLong(aStartIndex) != b.getLong(bStartIndex)) {
return false;
}
aStartIndex += 8;
bStartIndex += 8;
}
} else {
for (int i = longCount; i > 0; i --) {
if (a.getLong(aStartIndex) != swapLong(b.getLong(bStartIndex))) {
return false;
}
aStartIndex += 8;
bStartIndex += 8;
}
}
for (int i = byteCount; i > 0; i --) {
if (a.getByte(aStartIndex) != b.getByte(bStartIndex)) {
return false;
}
aStartIndex ++;
bStartIndex ++;
}
return true;
}
/**
* Returns {@code true} if and only if the two specified buffers are
* identical to each other as described in {@link ByteBuf#equals(Object)}.
* This method is useful when implementing a new buffer type.
*/
public static boolean equals(ByteBuf bufferA, ByteBuf bufferB) {
final int aLen = bufferA.readableBytes();
if (aLen != bufferB.readableBytes()) {
return false;
}
return equals(bufferA, bufferA.readerIndex(), bufferB, bufferB.readerIndex(), aLen);
}
/**
* Compares the two specified buffers as described in {@link ByteBuf#compareTo(ByteBuf)}.
* This method is useful when implementing a new buffer type.
*/
public static int compare(ByteBuf bufferA, ByteBuf bufferB) {
final int aLen = bufferA.readableBytes();
final int bLen = bufferB.readableBytes();
final int minLength = Math.min(aLen, bLen);
final int uintCount = minLength >>> 2;
final int byteCount = minLength & 3;
int aIndex = bufferA.readerIndex();
int bIndex = bufferB.readerIndex();
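// Compare four bytes at a time as unsigned 32-bit values, accumulating the first difference in a long
// so that the subtraction cannot overflow.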
if (uintCount > 0) {
boolean bufferAIsBigEndian = bufferA.order() == ByteOrder.BIG_ENDIAN;
final long res;
int uintCountIncrement = uintCount << 2;
if (bufferA.order() == bufferB.order()) {
res = bufferAIsBigEndian ? compareUintBigEndian(bufferA, bufferB, aIndex, bIndex, uintCountIncrement) :
compareUintLittleEndian(bufferA, bufferB, aIndex, bIndex, uintCountIncrement);
} else {
res = bufferAIsBigEndian ? compareUintBigEndianA(bufferA, bufferB, aIndex, bIndex, uintCountIncrement) :
compareUintBigEndianB(bufferA, bufferB, aIndex, bIndex, uintCountIncrement);
}
if (res != 0) {
// Clamp to the int range so the cast below cannot overflow.
return (int) Math.min(Integer.MAX_VALUE, Math.max(Integer.MIN_VALUE, res));
}
aIndex += uintCountIncrement;
bIndex += uintCountIncrement;
}
for (int aEnd = aIndex + byteCount; aIndex < aEnd; ++aIndex, ++bIndex) {
int comp = bufferA.getUnsignedByte(aIndex) - bufferB.getUnsignedByte(bIndex);
if (comp != 0) {
return comp;
}
}
return aLen - bLen;
}
private static long compareUintBigEndian(
ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) {
for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) {
long comp = bufferA.getUnsignedInt(aIndex) - bufferB.getUnsignedInt(bIndex);
if (comp != 0) {
return comp;
}
}
return 0;
}
private static long compareUintLittleEndian(
ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) {
for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) {
long comp = bufferA.getUnsignedIntLE(aIndex) - bufferB.getUnsignedIntLE(bIndex);
if (comp != 0) {
return comp;
}
}
return 0;
}
private static long compareUintBigEndianA(
ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) {
for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) {
long comp = bufferA.getUnsignedInt(aIndex) - bufferB.getUnsignedIntLE(bIndex);
if (comp != 0) {
return comp;
}
}
return 0;
}
private static long compareUintBigEndianB(
ByteBuf bufferA, ByteBuf bufferB, int aIndex, int bIndex, int uintCountIncrement) {
for (int aEnd = aIndex + uintCountIncrement; aIndex < aEnd; aIndex += 4, bIndex += 4) {
long comp = bufferA.getUnsignedIntLE(aIndex) - bufferB.getUnsignedInt(bIndex);
if (comp != 0) {
return comp;
}
}
return 0;
}
/**
* The default implementation of {@link ByteBuf#indexOf(int, int, byte)}.
* If {@code fromIndex} is greater than {@code toIndex}, the search is performed in reverse.
* This method is useful when implementing a new buffer type.
*/
public static int indexOf(ByteBuf buffer, int fromIndex, int toIndex, byte value) {
if (fromIndex <= toIndex) {
return firstIndexOf(buffer, fromIndex, toIndex, value);
} else {
return lastIndexOf(buffer, fromIndex, toIndex, value);
}
}
/**
* Toggles the endianness of the specified 16-bit short integer.
*/
public static short swapShort(short value) {
return Short.reverseBytes(value);
}
/**
* Toggles the endianness of the specified 24-bit medium integer.
*/
public static int swapMedium(int value) {
int swapped = value << 16 & 0xff0000 | value & 0xff00 | value >>> 16 & 0xff;
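// A medium is a signed 24-bit value: if the sign bit of the swapped value is set, sign-extend it
// into the upper byte.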
if ((swapped & 0x800000) != 0) {
swapped |= 0xff000000;
}
return swapped;
}
/**
* Toggles the endianness of the specified 32-bit integer.
*/
public static int swapInt(int value) {
return Integer.reverseBytes(value);
}
/**
* Toggles the endianness of the specified 64-bit long integer.
*/
public static long swapLong(long value) {
return Long.reverseBytes(value);
}
/**
* Read the given amount of bytes into a new {@link ByteBuf} that is allocated from the {@link ByteBufAllocator}.
*/
public static ByteBuf readBytes(ByteBufAllocator alloc, ByteBuf buffer, int length) {
boolean release = true;
ByteBuf dst = alloc.buffer(length);
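// If the read below throws, release the freshly allocated buffer to avoid leaking it.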
try {
buffer.readBytes(dst);
release = false;
return dst;
} finally {
if (release) {
dst.release();
}
}
}
private static int firstIndexOf(ByteBuf buffer, int fromIndex, int toIndex, byte value) {
fromIndex = Math.max(fromIndex, 0);
if (fromIndex >= toIndex || buffer.capacity() == 0) {
return -1;
}
return buffer.forEachByte(fromIndex, toIndex - fromIndex, new ByteProcessor.IndexOfProcessor(value));
}
private static int lastIndexOf(ByteBuf buffer, int fromIndex, int toIndex, byte value) {
fromIndex = Math.min(fromIndex, buffer.capacity());
if (fromIndex < 0 || buffer.capacity() == 0) {
return -1;
}
return buffer.forEachByteDesc(toIndex, fromIndex - toIndex, new ByteProcessor.IndexOfProcessor(value));
}
/**
* Encode a {@link CharSequence} in <a href="http://en.wikipedia.org/wiki/UTF-8">UTF-8</a> and write
* it to a {@link ByteBuf} allocated with {@code alloc}.
* @param alloc The allocator used to allocate a new {@link ByteBuf}.
* @param seq The characters to write into a buffer.
* @return The {@link ByteBuf} which contains the <a href="http://en.wikipedia.org/wiki/UTF-8">UTF-8</a> encoded
* result.
*/
public static ByteBuf writeUtf8(ByteBufAllocator alloc, CharSequence seq) {
// UTF-8 uses max. 3 bytes per char, so calculate the worst case.
ByteBuf buf = alloc.buffer(utf8MaxBytes(seq));
writeUtf8(buf, seq);
return buf;
}
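// Illustrative example: writeUtf8(alloc, "\u00e9") yields a buffer whose readable bytes are 0xc3 0xa9.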
/**
* Encode a {@link CharSequence} in <a href="http://en.wikipedia.org/wiki/UTF-8">UTF-8</a> and write
* it to a {@link ByteBuf}.
*
* This method returns the actual number of bytes written.
*/
public static int writeUtf8(ByteBuf buf, CharSequence seq) {
final int len = seq.length();
buf.ensureWritable(utf8MaxBytes(seq));
for (;;) {
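// Walk down any wrappers until an AbstractByteBuf fast-path applies, otherwise fall back to
// String.getBytes(...).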
if (buf instanceof AbstractByteBuf) {
AbstractByteBuf byteBuf = (AbstractByteBuf) buf;
int written = writeUtf8(byteBuf, byteBuf.writerIndex, seq, len);
byteBuf.writerIndex += written;
return written;
} else if (buf instanceof WrappedByteBuf) {
// Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
buf = buf.unwrap();
} else {
byte[] bytes = seq.toString().getBytes(CharsetUtil.UTF_8);
buf.writeBytes(bytes);
return bytes.length;
}
}
}
// Fast-Path implementation
static int writeUtf8(AbstractByteBuf buffer, int writerIndex, CharSequence seq, int len) {
int oldWriterIndex = writerIndex;
// We can use the _set methods as they do not need any index or reference checks.
// This is possible because we called ensureWritable(...) beforehand.
for (int i = 0; i < len; i++) {
char c = seq.charAt(i);
if (c < 0x80) {
buffer._setByte(writerIndex++, (byte) c);
} else if (c < 0x800) {
buffer._setByte(writerIndex++, (byte) (0xc0 | (c >> 6)));
buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
} else if (isSurrogate(c)) {
if (!Character.isHighSurrogate(c)) {
buffer._setByte(writerIndex++, WRITE_UTF_UNKNOWN);
continue;
}
final char c2;
try {
// Surrogate Pair consumes 2 characters. Optimistically try to get the next character to avoid
// duplicate bounds checking with charAt. If an IndexOutOfBoundsException is thrown we will
// re-throw a more informative exception describing the problem.
c2 = seq.charAt(++i);
} catch (IndexOutOfBoundsException e) {
buffer._setByte(writerIndex++, WRITE_UTF_UNKNOWN);
break;
}
if (!Character.isLowSurrogate(c2)) {
buffer._setByte(writerIndex++, WRITE_UTF_UNKNOWN);
buffer._setByte(writerIndex++, Character.isHighSurrogate(c2) ? WRITE_UTF_UNKNOWN : c2);
continue;
}
int codePoint = Character.toCodePoint(c, c2);
// See http://www.unicode.org/versions/Unicode7.0.0/ch03.pdf#G2630.
buffer._setByte(writerIndex++, (byte) (0xf0 | (codePoint >> 18)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((codePoint >> 12) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((codePoint >> 6) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | (codePoint & 0x3f)));
} else {
buffer._setByte(writerIndex++, (byte) (0xe0 | (c >> 12)));
buffer._setByte(writerIndex++, (byte) (0x80 | ((c >> 6) & 0x3f)));
buffer._setByte(writerIndex++, (byte) (0x80 | (c & 0x3f)));
}
}
return writerIndex - oldWriterIndex;
}
/**
* Returns the maximum number of bytes needed to encode the given character sequence in UTF-8.
*/
public static int utf8MaxBytes(CharSequence seq) {
return seq.length() * MAX_BYTES_PER_CHAR_UTF8;
}
/**
* Encode a {@link CharSequence} in <a href="http://en.wikipedia.org/wiki/ASCII">ASCII</a> and write
* it to a {@link ByteBuf} allocated with {@code alloc}.
* @param alloc The allocator used to allocate a new {@link ByteBuf}.
* @param seq The characters to write into a buffer.
* @return The {@link ByteBuf} which contains the <a href="http://en.wikipedia.org/wiki/ASCII">ASCII</a> encoded
* result.
*/
public static ByteBuf writeAscii(ByteBufAllocator alloc, CharSequence seq) {
// ASCII uses 1 byte per char
ByteBuf buf = alloc.buffer(seq.length());
writeAscii(buf, seq);
return buf;
}
/**
* Encode a {@link CharSequence} in <a href="http://en.wikipedia.org/wiki/ASCII">ASCII</a> and write it
* to a {@link ByteBuf}.
*
* This method returns the actual number of bytes written.
*/
public static int writeAscii(ByteBuf buf, CharSequence seq) {
// ASCII uses 1 byte per char
final int len = seq.length();
buf.ensureWritable(len);
if (seq instanceof AsciiString) {
AsciiString asciiString = (AsciiString) seq;
buf.writeBytes(asciiString.array(), asciiString.arrayOffset(), asciiString.length());
} else {
for (;;) {
if (buf instanceof AbstractByteBuf) {
AbstractByteBuf byteBuf = (AbstractByteBuf) buf;
int written = writeAscii(byteBuf, byteBuf.writerIndex, seq, len);
byteBuf.writerIndex += written;
return written;
} else if (buf instanceof WrappedByteBuf) {
// Unwrap as the wrapped buffer may be an AbstractByteBuf and so we can use fast-path.
buf = buf.unwrap();
} else {
buf.writeBytes(seq.toString().getBytes(CharsetUtil.US_ASCII));
break;
}
}
}
return len;
}
// Fast-Path implementation
static int writeAscii(AbstractByteBuf buffer, int writerIndex, CharSequence seq, int len) {
// We can use the _set methods as they do not need any index or reference checks.
// This is possible because we called ensureWritable(...) beforehand.
for (int i = 0; i < len; i++) {
buffer._setByte(writerIndex++, (byte) seq.charAt(i));
}
return len;
}
/**
* Encode the given {@link CharBuffer} using the given {@link Charset} into a new {@link ByteBuf} which
* is allocated via the {@link ByteBufAllocator}.
*/
public static ByteBuf encodeString(ByteBufAllocator alloc, CharBuffer src, Charset charset) {
return encodeString0(alloc, false, src, charset, 0);
}
/**
* Encode the given {@link CharBuffer} using the given {@link Charset} into a new {@link ByteBuf} which
* is allocated via the {@link ByteBufAllocator}.
*
* @param alloc The {@link ByteBufAllocator} to allocate {@link ByteBuf}.
* @param src The {@link CharBuffer} to encode.
* @param charset The specified {@link Charset}.
* @param extraCapacity the extra capacity to allocate in addition to the space needed for encoding.
*/
public static ByteBuf encodeString(ByteBufAllocator alloc, CharBuffer src, Charset charset, int extraCapacity) {
return encodeString0(alloc, false, src, charset, extraCapacity);
}
static ByteBuf encodeString0(ByteBufAllocator alloc, boolean enforceHeap, CharBuffer src, Charset charset,
int extraCapacity) {
final CharsetEncoder encoder = CharsetUtil.encoder(charset);
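// Size the buffer for the worst case: every remaining char may expand to maxBytesPerChar() bytes,
// plus the requested extra capacity.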
int length = (int) ((double) src.remaining() * encoder.maxBytesPerChar()) + extraCapacity;
boolean release = true;
final ByteBuf dst;
if (enforceHeap) {
dst = alloc.heapBuffer(length);
} else {
dst = alloc.buffer(length);
}
try {
final ByteBuffer dstBuf = dst.internalNioBuffer(dst.readerIndex(), length);
final int pos = dstBuf.position();
CoderResult cr = encoder.encode(src, dstBuf, true);
if (!cr.isUnderflow()) {
cr.throwException();
}
cr = encoder.flush(dstBuf);
if (!cr.isUnderflow()) {
cr.throwException();
}
dst.writerIndex(dst.writerIndex() + dstBuf.position() - pos);
release = false;
return dst;
} catch (CharacterCodingException x) {
throw new IllegalStateException(x);
} finally {
if (release) {
dst.release();
}
}
}
static String decodeString(ByteBuf src, int readerIndex, int len, Charset charset) {
if (len == 0) {
return StringUtil.EMPTY_STRING;
}
final CharsetDecoder decoder = CharsetUtil.decoder(charset);
final int maxLength = (int) ((double) len * decoder.maxCharsPerByte());
CharBuffer dst = CHAR_BUFFERS.get();
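// Reuse the thread-local CharBuffer when it is large enough; otherwise allocate a bigger one and
// cache it, but only up to the configured maximum size.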
if (dst.length() < maxLength) {
dst = CharBuffer.allocate(maxLength);
if (maxLength <= MAX_CHAR_BUFFER_SIZE) {
CHAR_BUFFERS.set(dst);
}
} else {
dst.clear();
}
if (src.nioBufferCount() == 1) {
// Use internalNioBuffer(...) to reduce object creation.
decodeString(decoder, src.internalNioBuffer(readerIndex, len), dst);
} else {
// We use a heap buffer as CharsetDecoder is most likely able to use a fast-path if src and dst buffers
// are both backed by a byte array.
ByteBuf buffer = src.alloc().heapBuffer(len);
try {
buffer.writeBytes(src, readerIndex, len);
// Use internalNioBuffer(...) to reduce object creation.
decodeString(decoder, buffer.internalNioBuffer(buffer.readerIndex(), len), dst);
} finally {
// Release the temporary buffer again.
buffer.release();
}
}
return dst.flip().toString();
}
private static void decodeString(CharsetDecoder decoder, ByteBuffer src, CharBuffer dst) {
try {
CoderResult cr = decoder.decode(src, dst, true);
if (!cr.isUnderflow()) {
cr.throwException();
}
cr = decoder.flush(dst);
if (!cr.isUnderflow()) {
cr.throwException();
}
} catch (CharacterCodingException x) {
throw new IllegalStateException(x);
}
}
/**
* Returns a cached thread-local direct buffer, if available.
*
* @return a cached thread-local direct buffer, if available. {@code null} otherwise.
*/
public static ByteBuf threadLocalDirectBuffer() {
if (THREAD_LOCAL_BUFFER_SIZE <= 0) {
return null;
}
if (PlatformDependent.hasUnsafe()) {
return ThreadLocalUnsafeDirectByteBuf.newInstance();
} else {
return ThreadLocalDirectByteBuf.newInstance();
}
}
/**
* Create a copy of the underlying storage from {@code buf} into a byte array.
* The copy will start at {@link ByteBuf#readerIndex()} and copy {@link ByteBuf#readableBytes()} bytes.
*/
public static byte[] getBytes(ByteBuf buf) {
return getBytes(buf, buf.readerIndex(), buf.readableBytes());
}
/**
* Create a copy of the underlying storage from {@code buf} into a byte array.
* The copy will start at {@code start} and copy {@code length} bytes.
*/
public static byte[] getBytes(ByteBuf buf, int start, int length) {
return getBytes(buf, start, length, true);
}
/**
* Return an array of the underlying storage from {@code buf} into a byte array.
* The copy will start at {@code start} and copy {@code length} bytes.
* If {@code copy} is true a copy will be made of the memory.
* If {@code copy} is false the underlying storage will be shared, if possible.
*/
public static byte[] getBytes(ByteBuf buf, int start, int length, boolean copy) {
if (isOutOfBounds(start, length, buf.capacity())) {
throw new IndexOutOfBoundsException("expected: " + "0 <= start(" + start + ") <= start + length(" + length
+ ") <= " + "buf.capacity(" + buf.capacity() + ')');
}
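// Share the backing array directly only when no copy was requested and the range covers the whole buffer.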
if (buf.hasArray()) {
if (copy || start != 0 || length != buf.capacity()) {
int baseOffset = buf.arrayOffset() + start;
return Arrays.copyOfRange(buf.array(), baseOffset, baseOffset + length);
} else {
return buf.array();
}
}
byte[] v = new byte[length];
buf.getBytes(start, v);
return v;
}
/**
* Copies the content of {@code src} to a {@link ByteBuf} using {@link ByteBuf#setBytes(int, byte[], int, int)}.
* @param src The source of the data to copy.
* @param srcIdx the starting offset of characters to copy.
* @param dst the destination {@link ByteBuf}.
* @param dstIdx the starting offset in the destination buffer.
* @param length the number of characters to copy.
*/
public static void copy(AsciiString src, int srcIdx, ByteBuf dst, int dstIdx, int length) {
if (isOutOfBounds(srcIdx, length, src.length())) {
throw new IndexOutOfBoundsException("expected: " + "0 <= srcIdx(" + srcIdx + ") <= srcIdx + length("
+ length + ") <= srcLen(" + src.length() + ')');
}
checkNotNull(dst, "dst").setBytes(dstIdx, src.array(), srcIdx + src.arrayOffset(), length);
}
/**
* Copies the content of {@code src} to a {@link ByteBuf} using {@link ByteBuf#writeBytes(byte[], int, int)}.
* @param src The source of the data to copy.
* @param srcIdx the starting offset of characters to copy.
* @param dst the destination {@link ByteBuf}.
* @param length the number of characters to copy.
*/
public static void copy(AsciiString src, int srcIdx, ByteBuf dst, int length) {
if (isOutOfBounds(srcIdx, length, src.length())) {
throw new IndexOutOfBoundsException("expected: " + "0 <= srcIdx(" + srcIdx + ") <= srcIdx + length("
+ length + ") <= srcLen(" + src.length() + ')');
}
checkNotNull(dst, "dst").writeBytes(src.array(), srcIdx + src.arrayOffset(), length);
}
/**
* Returns a multi-line hexadecimal dump of the specified {@link ByteBuf} that is easy to read by humans.
*/
public static String prettyHexDump(ByteBuf buffer) {
return prettyHexDump(buffer, buffer.readerIndex(), buffer.readableBytes());
}
/**
* Returns a multi-line hexadecimal dump of the specified {@link ByteBuf} that is easy to read by humans,
* starting at the given {@code offset} using the given {@code length}.
*/
public static String prettyHexDump(ByteBuf buffer, int offset, int length) {
return HexUtil.prettyHexDump(buffer, offset, length);
}
/**
* Appends the prettified multi-line hexadecimal dump of the specified {@link ByteBuf} to the specified
* {@link StringBuilder} that is easy to read by humans.
*/
public static void appendPrettyHexDump(StringBuilder dump, ByteBuf buf) {
appendPrettyHexDump(dump, buf, buf.readerIndex(), buf.readableBytes());
}
/**
* Appends the prettified multi-line hexadecimal dump of the specified {@link ByteBuf} to the specified
* {@link StringBuilder} that is easy to read by humans, starting at the given {@code offset} using
* the given {@code length}.
*/
public static void appendPrettyHexDump(StringBuilder dump, ByteBuf buf, int offset, int length) {
HexUtil.appendPrettyHexDump(dump, buf, offset, length);
}
/* Separate class so that the expensive static initialization is only done when needed */
private static final class HexUtil {
private static final char[] BYTE2CHAR = new char[256];
private static final char[] HEXDUMP_TABLE = new char[256 * 4];
private static final String[] HEXPADDING = new String[16];
private static final String[] HEXDUMP_ROWPREFIXES = new String[65536 >>> 4];
private static final String[] BYTE2HEX = new String[256];
private static final String[] BYTEPADDING = new String[16];
static {
final char[] DIGITS = "0123456789abcdef".toCharArray();
for (int i = 0; i < 256; i ++) {
HEXDUMP_TABLE[ i << 1 ] = DIGITS[i >>> 4 & 0x0F];
HEXDUMP_TABLE[(i << 1) + 1] = DIGITS[i & 0x0F];
}
int i;
// Generate the lookup table for hex dump paddings
for (i = 0; i < HEXPADDING.length; i ++) {
int padding = HEXPADDING.length - i;
StringBuilder buf = new StringBuilder(padding * 3);
for (int j = 0; j < padding; j ++) {
buf.append(" ");
}
HEXPADDING[i] = buf.toString();
}
// Generate the lookup table for the start-offset header in each row (up to 64KiB).
for (i = 0; i < HEXDUMP_ROWPREFIXES.length; i ++) {
StringBuilder buf = new StringBuilder(12);
buf.append(NEWLINE);
buf.append(Long.toHexString(i << 4 & 0xFFFFFFFFL | 0x100000000L));
buf.setCharAt(buf.length() - 9, '|');
buf.append('|');
HEXDUMP_ROWPREFIXES[i] = buf.toString();
}
// Generate the lookup table for byte-to-hex-dump conversion
for (i = 0; i < BYTE2HEX.length; i ++) {
BYTE2HEX[i] = ' ' + StringUtil.byteToHexStringPadded(i);
}
// Generate the lookup table for byte dump paddings
for (i = 0; i < BYTEPADDING.length; i ++) {
int padding = BYTEPADDING.length - i;
StringBuilder buf = new StringBuilder(padding);
for (int j = 0; j < padding; j ++) {
buf.append(' ');
}
BYTEPADDING[i] = buf.toString();
}
// Generate the lookup table for byte-to-char conversion
for (i = 0; i < BYTE2CHAR.length; i ++) {
if (i <= 0x1f || i >= 0x7f) {
BYTE2CHAR[i] = '.';
} else {
BYTE2CHAR[i] = (char) i;
}
}
}
private static String hexDump(ByteBuf buffer, int fromIndex, int length) {
if (length < 0) {
throw new IllegalArgumentException("length: " + length);
}
if (length == 0) {
return "";
}
int endIndex = fromIndex + length;
char[] buf = new char[length << 1];
int srcIdx = fromIndex;
int dstIdx = 0;
for (; srcIdx < endIndex; srcIdx ++, dstIdx += 2) {
System.arraycopy(
HEXDUMP_TABLE, buffer.getUnsignedByte(srcIdx) << 1,
buf, dstIdx, 2);
}
return new String(buf);
}
private static String hexDump(byte[] array, int fromIndex, int length) {
if (length < 0) {
throw new IllegalArgumentException("length: " + length);
}
if (length == 0) {
return "";
}
int endIndex = fromIndex + length;
char[] buf = new char[length << 1];
int srcIdx = fromIndex;
int dstIdx = 0;
for (; srcIdx < endIndex; srcIdx ++, dstIdx += 2) {
System.arraycopy(
HEXDUMP_TABLE, (array[srcIdx] & 0xFF) << 1,
buf, dstIdx, 2);
}
return new String(buf);
}
private static String prettyHexDump(ByteBuf buffer, int offset, int length) {
if (length == 0) {
return StringUtil.EMPTY_STRING;
} else {
int rows = length / 16 + (length % 16 == 0? 0 : 1) + 4;
StringBuilder buf = new StringBuilder(rows * 80);
appendPrettyHexDump(buf, buffer, offset, length);
return buf.toString();
}
}
private static void appendPrettyHexDump(StringBuilder dump, ByteBuf buf, int offset, int length) {
if (isOutOfBounds(offset, length, buf.capacity())) {
throw new IndexOutOfBoundsException(
"expected: " + "0 <= offset(" + offset + ") <= offset + length(" + length
+ ") <= " + "buf.capacity(" + buf.capacity() + ')');
}
if (length == 0) {
return;
}
dump.append(
" +-------------------------------------------------+" +
NEWLINE + " | 0 1 2 3 4 5 6 7 8 9 a b c d e f |" +
NEWLINE + "+--------+-------------------------------------------------+----------------+");
final int startIndex = offset;
final int fullRows = length >>> 4;
final int remainder = length & 0xF;
// Dump the rows which have 16 bytes.
for (int row = 0; row < fullRows; row ++) {
int rowStartIndex = (row << 4) + startIndex;
// Per-row prefix.
appendHexDumpRowPrefix(dump, row, rowStartIndex);
// Hex dump
int rowEndIndex = rowStartIndex + 16;
for (int j = rowStartIndex; j < rowEndIndex; j ++) {
dump.append(BYTE2HEX[buf.getUnsignedByte(j)]);
}
dump.append(" |");
// ASCII dump
for (int j = rowStartIndex; j < rowEndIndex; j ++) {
dump.append(BYTE2CHAR[buf.getUnsignedByte(j)]);
}
dump.append('|');
}
// Dump the last row which has less than 16 bytes.
if (remainder != 0) {
int rowStartIndex = (fullRows << 4) + startIndex;
appendHexDumpRowPrefix(dump, fullRows, rowStartIndex);
// Hex dump
int rowEndIndex = rowStartIndex + remainder;
for (int j = rowStartIndex; j < rowEndIndex; j ++) {
dump.append(BYTE2HEX[buf.getUnsignedByte(j)]);
}
dump.append(HEXPADDING[remainder]);
dump.append(" |");
// Ascii dump
for (int j = rowStartIndex; j < rowEndIndex; j ++) {
dump.append(BYTE2CHAR[buf.getUnsignedByte(j)]);
}
dump.append(BYTEPADDING[remainder]);
dump.append('|');
}
dump.append(NEWLINE +
"+--------+-------------------------------------------------+----------------+");
}
private static void appendHexDumpRowPrefix(StringBuilder dump, int row, int rowStartIndex) {
if (row < HEXDUMP_ROWPREFIXES.length) {
dump.append(HEXDUMP_ROWPREFIXES[row]);
} else {
dump.append(NEWLINE);
dump.append(Long.toHexString(rowStartIndex & 0xFFFFFFFFL | 0x100000000L));
dump.setCharAt(dump.length() - 9, '|');
dump.append('|');
}
}
}
static final class ThreadLocalUnsafeDirectByteBuf extends UnpooledUnsafeDirectByteBuf {
private static final Recycler<ThreadLocalUnsafeDirectByteBuf> RECYCLER =
new Recycler<ThreadLocalUnsafeDirectByteBuf>() {
@Override
protected ThreadLocalUnsafeDirectByteBuf newObject(Handle<ThreadLocalUnsafeDirectByteBuf> handle) {
return new ThreadLocalUnsafeDirectByteBuf(handle);
}
};
static ThreadLocalUnsafeDirectByteBuf newInstance() {
ThreadLocalUnsafeDirectByteBuf buf = RECYCLER.get();
buf.setRefCnt(1);
return buf;
}
private final Handle<ThreadLocalUnsafeDirectByteBuf> handle;
private ThreadLocalUnsafeDirectByteBuf(Handle<ThreadLocalUnsafeDirectByteBuf> handle) {
super(UnpooledByteBufAllocator.DEFAULT, 256, Integer.MAX_VALUE);
this.handle = handle;
}
@Override
protected void deallocate() {
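// Buffers that grew beyond the configured thread-local size are actually freed; smaller ones are
// cleared and returned to the recycler for reuse.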
if (capacity() > THREAD_LOCAL_BUFFER_SIZE) {
super.deallocate();
} else {
clear();
handle.recycle(this);
}
}
}
static final class ThreadLocalDirectByteBuf extends UnpooledDirectByteBuf {
private static final Recycler<ThreadLocalDirectByteBuf> RECYCLER = new Recycler<ThreadLocalDirectByteBuf>() {
@Override
protected ThreadLocalDirectByteBuf newObject(Handle<ThreadLocalDirectByteBuf> handle) {
return new ThreadLocalDirectByteBuf(handle);
}
};
static ThreadLocalDirectByteBuf newInstance() {
ThreadLocalDirectByteBuf buf = RECYCLER.get();
buf.setRefCnt(1);
return buf;
}
private final Handle<ThreadLocalDirectByteBuf> handle;
private ThreadLocalDirectByteBuf(Handle<ThreadLocalDirectByteBuf> handle) {
super(UnpooledByteBufAllocator.DEFAULT, 256, Integer.MAX_VALUE);
this.handle = handle;
}
@Override
protected void deallocate() {
if (capacity() > THREAD_LOCAL_BUFFER_SIZE) {
super.deallocate();
} else {
clear();
handle.recycle(this);
}
}
}
/**
* Returns {@code true} if the given {@link ByteBuf} is valid text using the given {@link Charset},
* otherwise return {@code false}.
*
* @param buf The given {@link ByteBuf}.
* @param charset The specified {@link Charset}.
*/
public static boolean isText(ByteBuf buf, Charset charset) {
return isText(buf, buf.readerIndex(), buf.readableBytes(), charset);
}
/**
* Returns {@code true} if the specified {@link ByteBuf} starting at {@code index} with {@code length} is valid
* text using the given {@link Charset}, otherwise return {@code false}.
*
* @param buf The given {@link ByteBuf}.
* @param index The start index of the specified buffer.
* @param length The length of the specified buffer.
* @param charset The specified {@link Charset}.
*
* @throws IndexOutOfBoundsException if {@code index} + {@code length} is greater than
* {@code buf.readerIndex() + buf.readableBytes()}
*/
public static boolean isText(ByteBuf buf, int index, int length, Charset charset) {
checkNotNull(buf, "buf");
checkNotNull(charset, "charset");
final int maxIndex = buf.readerIndex() + buf.readableBytes();
if (index < 0 || length < 0 || index > maxIndex - length) {
throw new IndexOutOfBoundsException("index: " + index + " length: " + length);
}
if (charset.equals(CharsetUtil.UTF_8)) {
return isUtf8(buf, index, length);
} else if (charset.equals(CharsetUtil.US_ASCII)) {
return isAscii(buf, index, length);
} else {
CharsetDecoder decoder = CharsetUtil.decoder(charset, CodingErrorAction.REPORT, CodingErrorAction.REPORT);
try {
if (buf.nioBufferCount() == 1) {
decoder.decode(buf.internalNioBuffer(index, length));
} else {
ByteBuf heapBuffer = buf.alloc().heapBuffer(length);
try {
heapBuffer.writeBytes(buf, index, length);
decoder.decode(heapBuffer.internalNioBuffer(heapBuffer.readerIndex(), length));
} finally {
heapBuffer.release();
}
}
return true;
} catch (CharacterCodingException ignore) {
return false;
}
}
}
/**
* Aborts on a byte which is not a valid ASCII character.
*/
private static final ByteProcessor FIND_NON_ASCII = new ByteProcessor() {
@Override
public boolean process(byte value) {
return value >= 0;
}
};
/**
* Returns {@code true} if the specified {@link ByteBuf} starting at {@code index} with {@code length} is valid
* ASCII text, otherwise return {@code false}.
*
* @param buf The given {@link ByteBuf}.
* @param index The start index of the specified buffer.
* @param length The length of the specified buffer.
*/
private static boolean isAscii(ByteBuf buf, int index, int length) {
return buf.forEachByte(index, length, FIND_NON_ASCII) == -1;
}
/**
* Returns {@code true} if the specified {@link ByteBuf} starting at {@code index} with {@code length} is valid
* UTF8 text, otherwise return {@code false}.
*
* @param buf The given {@link ByteBuf}.
* @param index The start index of the specified buffer.
* @param length The length of the specified buffer.
*
* @see
* <a href="http://www.ietf.org/rfc/rfc3629.txt">UTF-8 Definition</a>
*
* <pre>
* 1. Bytes format of UTF-8
*
* The table below summarizes the format of these different octet types.
* The letter x indicates bits available for encoding bits of the character number.
*
* Char. number range | UTF-8 octet sequence
* (hexadecimal) | (binary)
* --------------------+---------------------------------------------
* 0000 0000-0000 007F | 0xxxxxxx
* 0000 0080-0000 07FF | 110xxxxx 10xxxxxx
* 0000 0800-0000 FFFF | 1110xxxx 10xxxxxx 10xxxxxx
* 0001 0000-0010 FFFF | 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
* </pre>
*
* <pre>
* 2. Syntax of UTF-8 Byte Sequences
*
* UTF8-octets = *( UTF8-char )
* UTF8-char = UTF8-1 / UTF8-2 / UTF8-3 / UTF8-4
* UTF8-1 = %x00-7F
* UTF8-2 = %xC2-DF UTF8-tail
* UTF8-3 = %xE0 %xA0-BF UTF8-tail /
* %xE1-EC 2( UTF8-tail ) /
* %xED %x80-9F UTF8-tail /
* %xEE-EF 2( UTF8-tail )
* UTF8-4 = %xF0 %x90-BF 2( UTF8-tail ) /
* %xF1-F3 3( UTF8-tail ) /
* %xF4 %x80-8F 2( UTF8-tail )
* UTF8-tail = %x80-BF
* </pre>
*/
private static boolean isUtf8(ByteBuf buf, int index, int length) {
final int endIndex = index + length;
while (index < endIndex) {
byte b1 = buf.getByte(index++);
byte b2, b3, b4;
if ((b1 & 0x80) == 0) {
// 1 byte
continue;
}
if ((b1 & 0xE0) == 0xC0) {
// 2 bytes
//
// Bit/Byte pattern
// 110xxxxx 10xxxxxx
// C2..DF 80..BF
if (index >= endIndex) { // not enough bytes
return false;
}
b2 = buf.getByte(index++);
if ((b2 & 0xC0) != 0x80) { // 2nd byte does not start with 10
return false;
}
if ((b1 & 0xFF) < 0xC2) { // out of lower bound
return false;
}
} else if ((b1 & 0xF0) == 0xE0) {
// 3 bytes
//
// Bit/Byte pattern
// 1110xxxx 10xxxxxx 10xxxxxx
// E0 A0..BF 80..BF
// E1..EC 80..BF 80..BF
// ED 80..9F 80..BF
// EE..EF 80..BF 80..BF
if (index > endIndex - 2) { // not enough bytes
return false;
}
b2 = buf.getByte(index++);
b3 = buf.getByte(index++);
if ((b2 & 0xC0) != 0x80 || (b3 & 0xC0) != 0x80) { // 2nd or 3rd byte does not start with 10
return false;
}
if ((b1 & 0x0F) == 0x00 && (b2 & 0xFF) < 0xA0) { // out of lower bound
return false;
}
if ((b1 & 0x0F) == 0x0D && (b2 & 0xFF) > 0x9F) { // out of upper bound
return false;
}
} else if ((b1 & 0xF8) == 0xF0) {
// 4 bytes
//
// Bit/Byte pattern
// 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
// F0 90..BF 80..BF 80..BF
// F1..F3 80..BF 80..BF 80..BF
// F4 80..8F 80..BF 80..BF
if (index > endIndex - 3) { // not enough bytes
return false;
}
b2 = buf.getByte(index++);
b3 = buf.getByte(index++);
b4 = buf.getByte(index++);
if ((b2 & 0xC0) != 0x80 || (b3 & 0xC0) != 0x80 || (b4 & 0xC0) != 0x80) {
// 2nd, 3rd or 4th byte does not start with 10
return false;
}
if ((b1 & 0xFF) > 0xF4 // b1 invalid
|| (b1 & 0xFF) == 0xF0 && (b2 & 0xFF) < 0x90 // b2 out of lower bound
|| (b1 & 0xFF) == 0xF4 && (b2 & 0xFF) > 0x8F) { // b2 out of upper bound
return false;
}
} else {
return false;
}
}
return true;
}
private ByteBufUtil() { }
}