Motivation:
There are many unused imports. We should remove them to keep the code clean.
Modification:
Removed unused imports.
Result:
No unused imports.
Add a ByteBuf -> Buffer adaptor
Motivation:
When migrating APIs from `ByteBuf` to `Buffer`, we may sometimes have to bridge APIs which use a `ByteBuf` (e.g. `ByteToMessageDecoder` at the moment) to APIs that use a `Buffer`, even when the allocator for the `ByteBuf` hasn't been migrated to use `ByteBufAllocatorAdaptor`.
Modification:
- Add a simple copy adaptor that copies all data from `ByteBuf` to a new `Buffer`.
- I noticed we do not have a `writeBytes` method with offsets, so added that too.
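As a rough illustration, a copy-based adaptor could look like this (a minimal sketch; `BufferAllocator#allocate(int)` and the offset-taking `writeBytes` follow the new API described in this series, so treat the exact signatures as assumptions):

```java
// Hedged sketch of a copy adaptor, not the actual implementation.
static Buffer copyToBuffer(BufferAllocator allocator, ByteBuf source) {
    int readable = source.readableBytes();
    Buffer target = allocator.allocate(readable);
    byte[] tmp = new byte[readable];
    source.getBytes(source.readerIndex(), tmp); // does not move the reader index
    target.writeBytes(tmp, 0, tmp.length);      // the new offset-taking writeBytes
    return target;
}
```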
Result:
One more adaptor to bridge old and new buffer APIs.
Motivation:
In Netty 5 we wish to have a simpler, safer, future-proof, and more consistent buffer API.
We developed such an API in the incubating buffer repository and took it through multiple rounds of review and adjustment.
This PR/commit brings the results of that work into the Netty 5 branch of the main Netty repository.
Modifications:
* `Buffer` is an interface, and all implementations are hidden behind it.
There is no longer an inheritance hierarchy of abstract classes and implementations.
* Reference counting is gone.
After a buffer has been allocated, calling `close` on it will deallocate it.
It is then up to users and integrators to ensure that the life-times of buffers are managed correctly.
This is usually not a problem as buffers tend to flow through the pipeline to be released after a terminal IO operation.
* Slice and duplicate methods are replaced with `split`.
By removing slices, duplicates, and reference counting, there is no longer a possibility that a buffer and/or its memory can be shared and accessible through multiple routes.
This solves the problem of data being accessed from multiple places in an uncoordinated way, and the problem of buffer memory being closed while being in use by some unsuspecting piece of code.
Some adjustments will have to be made to other APIs, idioms, and usages, since `split` is not always a replacement for `slice` in some use cases.
* The `split` method allows memory to be shared among multiple buffers, but only in non-overlapping regions (see the sketch after this list).
When the memory regions don't overlap, it will not be possible for the different buffers to interfere with each other.
An internal, and completely transparent, reference counting system ensures that the backing memory is released once the last buffer view is closed.
* A Send API has been introduced that can be used to enforce (in the type system) the transfer of buffer ownership.
This is not expected to be used in the pipeline flow itself, but rather for other objects that wrap buffers and wish to avoid becoming "shared views" — the absence of "shared views" of memory is important for avoiding bugs in the absence of reference counting.
* A new BufferAllocator API, where the choice of implementation determines factors like on-/off-heap, pooling or not.
How access to the different allocators will be exposed to integrators will be decided later.
Perhaps they'll be directly accessible on the `ChannelHandlerContext`.
* The `PooledBufferAllocator` has been copied and modified to match the new allocator API.
This includes unifying its implementation that was previously split across on-heap and off-heap.
* The `PooledBufferAllocator` implementation has also been adjusted to allocate 4 MiB chunks by default, and a few changes have been made to the implementation to make a newly created, empty allocator use significantly less heap memory.
* A `Resource` interface has been added, which defines the life-cycle methods and the `send` method.
The `Buffer` interface extends this.
* Analogues for `ByteBufHolder` have been added in the `BufferHolder` and `BufferRef` classes.
* `ByteCursor` is added as a new way to iterate the data in buffers.
The byte cursor API is designed to be more JIT friendly than an iterator, or the existing `ByteProcessor` interface.
* `CompositeBuffer` no longer permits the same level of access to its internal components.
The composite buffer enforces its ownership of its components via the `Send` API, and the components can only be individually accessed with the `forEachReadable` and `forEachWritable` methods.
This keeps the API and behavioral differences between composite and non-composite buffers to a minimum.
* Two implementations of the `Buffer` interface are provided with the API: One based on `ByteBuffer`, and one based on `sun.misc.Unsafe`.
The `ByteBuffer` implementation is used by default.
More implementations can be loaded from the classpath via service loading.
The `MemorySegment` based implementation is left behind in the incubator repository.
* An extensive and highly parameterised test suite has been added, to ensure that all implementations have consistent and correct behaviour, regardless of their configuration or composition.
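To illustrate the life-cycle described above, here is a hedged sketch using incubator-style names (`BufferAllocator.heap()`, `split()`, `send()`, and `receive()` are assumptions about the exact API surface):

```java
try (BufferAllocator allocator = BufferAllocator.heap()) { // hypothetical factory
    Buffer buf = allocator.allocate(64);
    buf.writeInt(42);
    Buffer first = buf.split();        // shares memory with buf, in non-overlapping regions
    Send<Buffer> send = first.send();  // transfers ownership; 'first' is invalidated
    try (Buffer received = send.receive()) {
        assert received.readInt() == 42;
    }
    buf.close(); // backing memory is freed once the last split part is closed
}
```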
Result:
We have a new buffer API that is simpler, better tested, more consistent in behaviour, and safer by design than the existing `ByteBuf` API.
The next legs of this journey will be about integrating this new API into Netty proper, and deprecating (and eventually removing) the `ByteBuf` API.
This fixes #11024, #8601, #8543, #8542, #8534, #3358, and #3306.
Use the Two-Way algorithm to optimize the ByteBufUtil.indexOf() method
Motivation:
ByteBufUtil.indexOf() can be inefficient for substring search on a
ByteBuf in terms of algorithmic complexity (O(needle.readableBytes * haystack.readableBytes)). Consider using the Two-Way algorithm to optimize it.
Modification:
Use the Two-Way algorithm to optimize the ByteBufUtil.indexOf() method.
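The optimization is internal to ByteBufUtil; call sites are unchanged. For example:

```java
ByteBuf haystack = Unpooled.copiedBuffer("hello netty world", CharsetUtil.US_ASCII);
ByteBuf needle = Unpooled.copiedBuffer("netty", CharsetUtil.US_ASCII);
int index = ByteBufUtil.indexOf(needle, haystack); // 6, now found in linear time
```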
Result:
The ByteBufUtil.indexOf() method performs better than the original implementation.
Motivation:
b89a807d15 moved the buffer tests to JUnit 5 but introduced a small error which could lead to test failures.
Modifications:
Correctly override the method and assert that super throws (as we cannot expand the buffer).
Result:
No more test failures
Motivation:
JUnit 5 is more expressive, extensible, and composable in many ways, and it's better able to run tests in parallel.
Modifications:
Use JUnit 5 in tests.
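The typical shape of the migration (standard JUnit 5 idioms, shown purely for illustration):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ExampleMigratedTest { // illustrative test, not from the Netty code base
    @Test
    void intIsFourBytes() {
        assertEquals(4, Integer.BYTES);
    }
}
```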
Result:
Related to https://github.com/netty/netty/issues/10757
Motivation:
When both object references are the same, the method should return 0 immediately, with no need to loop over and compare the contents of the ByteBuf.
Modification:
Added short circuit when both object-references are the same for equals and compare methods.
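A sketch of the shape of the change (illustrative, not the exact Netty code):

```java
static int compareWithShortCircuit(ByteBuf a, ByteBuf b) {
    if (a == b) {
        return 0; // same reference: contents are trivially equal, skip the loop
    }
    return ByteBufUtil.compare(a, b); // otherwise compare contents byte by byte
}
```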
Result:
equals and compare now return immediately when both references point to the same object.
Motivation:
Null checks resulting in a NullPointerException or IllegalArgumentException, numeric range checks (> 0, >= 0), and not-empty checks for strings/arrays must never be anonymous, but must name the parameter or variable being checked. They must be specific and should not be combined with OR logic, e.g. if (a == null || b == null) throw new NullPointerException().
Modifications:
* Statically import the relevant checks
* Replace manual checks with ObjectUtil methods
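For example (ObjectUtil's check methods are the real Netty helpers; the surrounding class is illustrative):

```java
import static io.netty.util.internal.ObjectUtil.checkNotNull;
import static io.netty.util.internal.ObjectUtil.checkPositive;

final class Frame { // hypothetical class, for illustration only
    private final byte[] payload;
    private final int maxLength;

    Frame(byte[] payload, int maxLength) {
        // named, specific checks instead of anonymous "if (a == null || b == null)" logic
        this.payload = checkNotNull(payload, "payload");
        this.maxLength = checkPositive(maxLength, "maxLength");
    }
}
```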
Result:
All required checks are done with ObjectUtil, and some exception messages are improved.
Fixes #11170
Motivation:
Switch statements should always have a default block to ensure we do not "fall through" by mistake.
Modification:
Add default block
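For example (illustrative):

```java
final class StateMachine { // illustrative
    enum State { READ_HEADER, READ_BODY }

    static void advance(State state) {
        switch (state) {
        case READ_HEADER:
            // handle the header
            break;
        case READ_BODY:
            // handle the body
            break;
        default:
            // fail loudly on unexpected values instead of a silent fall-through
            throw new IllegalStateException("unexpected state: " + state);
        }
    }
}
```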
Result:
Code cleanup.
Signed-off-by: xingrufei <xingrufei@sogou-inc.com>
Co-authored-by: xingrufei <xingrufei@sogou-inc.com>
Motivation:
Alignment handling was broken, and basically turned into a fixed offset into each allocation address regardless of its initial value, instead of ensuring that the allocated address is either aligned or bumped to the nearest alignment offset.
The brokenness of the alignment handling extended so far, that overlapping ByteBuf instances could even be created, as was seen in #11101.
Modification:
Instead of fixing the per-allocation pointer bump, we now ensure that 1) the minimum page size is a whole multiple of the alignment, and 2) the reference memory for each chunk is bumped to the nearest aligned address, and finally 3) ensured that the reservations are whole multiples of the alignment, thus ensuring that the next allocation automatically occurs from an aligned address.
Incidentally, (3) above comes for free because the reservations are in whole pages, and in (1) we ensured that pages are sized in whole multiples of the alignment.
In order to ensure that the memory for a chunk is aligned, we introduce some new PlatformDependent infrastructure.
The new PlatformDependent.alignDirectBuffer method produces a slice of the given buffer, and the slice will have an address that is aligned.
This method is plainly available on ByteBuffer in Java 9 onwards, but for pre-9 we have to use Unsafe, which means it can fail and might not be available on all platforms.
Attempts to create a PooledByteBufAllocator that uses alignment, when this is not supported, will throw an exception.
Luckily, I think use of aligned allocations is rare.
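A sketch of the over-allocate-then-align idea behind the new infrastructure (the numbers are illustrative):

```java
int alignment = 64;                 // e.g. cache-line alignment
int chunkSize = 16 * 1024 * 1024;
ByteBuffer raw = ByteBuffer.allocateDirect(chunkSize + alignment);
// Slice so the slice's base address is a multiple of 'alignment'; on pre-Java-9
// platforms this relies on Unsafe and may not be supported.
ByteBuffer aligned = PlatformDependent.alignDirectBuffer(raw, alignment);
```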
Result:
Aligned pooled byte bufs now work correctly, and never have any overlap.
Fixes #11101
Motivation:
Components in a composite buffer can "go missing" if the composite is a slice of another composite and the parent has changed its layout.
Modification:
Where we would previously have thrown a NullPointerException, we now have a null-check for the component, and we instead throw an IllegalStateException with a more descriptive message.
Result:
It's now a bit easier to understand what is going on in these situations.
Fixes #10908
Motivation:
To make it possible to experiment with alternative buffer implementations, we need a way to abstract away the concrete buffers used throughout most of the Netty pipelines, while still having a common currency for doing IO in the end.
Modification:
- Introduce a ByteBufConvertible interface that allows arbitrary objects to convert themselves into ByteBuf objects.
- Everywhere in the code where we did an instanceof check for ByteBuf, we now do an instanceof check for ByteBufConvertible.
- ByteBuf itself implements ByteBufConvertible, and returns itself from the asByteBuf method.
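A sketch of the new pattern at a call site:

```java
static ByteBuf toByteBuf(Object msg) { // illustrative helper
    if (msg instanceof ByteBufConvertible) {
        return ((ByteBufConvertible) msg).asByteBuf(); // a ByteBuf returns itself
    }
    throw new IllegalArgumentException("not convertible to ByteBuf: " + msg);
}
```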
Result:
It is now possible to use Netty with alternative buffer implementations, as long as they can be converted to ByteBuf.
This has been verified elsewhere, with an alternative buffer implementation.
Motivation:
`File.createTempFile(String, String)` will create a temporary file in the directory given by the 'java.io.tmpdir' system property. The permissions on that file are subject to the umask. In the majority of cases, this means that the file Java creates has the permissions `-rw-r--r--`; thus, any other local user on that system can read the contents of that file.
This can be a security concern if any sensitive data is stored in this file.
This was reported by Jonathan Leitschuh <jonathan.leitschuh@gmail.com> as a security problem.
Modifications:
Use Files.createTempFile(...), which will use safe defaults when running on Java 7 and later. If running on Java 6 there isn't much we can do, which is fair enough, as Java 6 shouldn't be considered "safe" anyway.
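For example (standard java.nio.file API; the helper class is illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class TempFiles { // illustrative helper
    static Path newTempFile() throws IOException {
        // owner-only read/write permissions by default on POSIX file systems,
        // unlike File.createTempFile, which is subject to the umask
        return Files.createTempFile("netty-", ".tmp");
    }
}
```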
Result:
Create temporary files with sane permissions by default.
Motivation:
ReadOnlyByteBuf and ReadOnlyByteBuffer are not writable, but their writableBytes-related methods return non-zero values. This is inconsistent with these buffers being read-only.
Modifications:
- ReadOnlyByteBuf and ReadOnlyByteBuffer writableBytes-related methods should
return 0
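The expected behaviour after the change (sketch, using a read-only wrapper):

```java
ByteBuf readOnly = Unpooled.buffer(16).asReadOnly();
assert readOnly.writableBytes() == 0;    // previously reported non-zero
assert readOnly.maxWritableBytes() == 0;
assert !readOnly.isWritable();
```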
Result:
More correct ReadOnlyByteBuf and ReadOnlyByteBuffer behavior with respect to
writability.
Motivation:
When many ByteBufs with capacities in the range [7168, 8192] are requested, the size of 'chunk.subpages' may become inflated as those ByteBufs are released. This is inconsistent with other 'byteBuf.capacity' values.
Modification:
When maxNumElems == 1, the subpage also needs to be considered for removal from the pool.
Result:
Fixes #10896.
Co-authored-by: zxingy <zxingy@servyou.com.cn>
Motivation:
Found an invalid comment in UnpooledDirectByteBuf.
Modification:
Fixed a comment in UnpooledDirectByteBuf.
Result:
Fixed a comment in UnpooledDirectByteBuf.
Motivation:
We rely on this functionality in PoolChunk, and a bug was caught by a non-deterministic test failure
Modification:
Went back to the Algorithms book and reimplemented remove() the way it was meant to be done.
Result:
No test failures after 200,000 runs, so we have some confidence the code is correct now.
Motivation:
The uncached access to PoolChunk can be made faster, avoiding the allocation of boxed Longs, if we have primitive hash map and priority queue implementations for it.
Modification:
Add bespoke primitive implementations of a hash map and a priority queue for PoolChunk.
Remove all the long-boxing caused by the previous implementation.
The hash map is a linear-probing map with a fairly short probe that keeps the search within a couple of cache lines.
The priority queue is the same binary heap algorithm that's described in Algorithms by Sedgewick and Wayne.
The implementation avoids the Long boxing by relying on a long[] array.
This makes the internal-remove method faster, which is an important operation in PoolChunk.
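For a feel of the approach, here is a hedged, much-simplified sketch of a linear-probing map over long keys (Netty's actual implementation differs; a zero key doubles as the "empty" marker here, purely for brevity):

```java
final class LongLongHashMap {
    private final long[] keys;   // key 0 is treated as "empty" in this sketch
    private final long[] values;
    private final int mask;

    LongLongHashMap(int capacityPowerOfTwo) {
        keys = new long[capacityPowerOfTwo];
        values = new long[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    void put(long key, long value) {
        int i = index(key);
        while (keys[i] != 0 && keys[i] != key) {
            i = (i + 1) & mask; // short linear probe, cache friendly
        }
        keys[i] = key;
        values[i] = value;
    }

    long get(long key, long defaultValue) {
        int i = index(key);
        while (keys[i] != 0) {
            if (keys[i] == key) {
                return values[i];
            }
            i = (i + 1) & mask;
        }
        return defaultValue;
    }

    private int index(long key) {
        // spread the bits, then mask down to the table size; no Long boxing anywhere
        return (int) ((key * 0x9E3779B97F4A7C15L) >>> 48) & mask;
    }
}
```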
Result:
Roughly 13% performance uplift in buffer allocations that miss cache.
Motivation:
https://github.com/netty/netty/pull/10267 introduced a change that reduced fragmentation. Unfortunately, it also introduced a regression when it comes to caching of normal allocations. This can have a negative performance impact depending on the allocation sizes.
Modifications:
- Fix the algorithm that calculates the array size for normal allocation caches
- Correctly calculate the index for normal caches
- Add unit test
Result:
Fixes https://github.com/netty/netty/issues/10805
Motivation:
Passing a null byte[] to `Unsafe.copyMemory(xxx)` would cause the JVM to crash.
Modification:
Add a null check before calling `PlatformDependent.copyMemory(src, xxx)`.
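A sketch of the guard (names illustrative):

```java
static void copyChecked(byte[] src, int srcIndex, byte[] dst, int dstIndex, int length) {
    if (src == null) {
        throw new NullPointerException("src"); // fail fast instead of crashing the JVM
    }
    PlatformDependent.copyMemory(src, srcIndex, dst, dstIndex, length);
}
```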
Result:
Fixes #10791.
Motivation:
PoolChunk maintains multiple PriorityQueue<Long> collections. The usage
of PoolChunk#removeAvailRun unboxes the Long values to long, and then
this method uses queue.remove(..) which will auto box the value back to
Long. This creates unnecessary allocations via Long.valueOf(long).
Modifications:
- Adjust method signature and usage of PoolChunk#removeAvailRun to avoid
boxing
Result:
Less allocations as a result of PoolChunk#removeAvailRun.
Motivation:
There is no need for ByteProcessor to throw a checked exception.
The declared checked exception causes unnecessary code complications just to propagate it.
This can be cleaned up.
Modification:
ByteProcessor.process no longer declares to throw a checked exception, and all the places that were trying to cope with the checked exception have been simplified.
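Processors can now be plain lambdas with no exception plumbing, e.g.:

```java
static int findFirstCarriageReturn(ByteBuf buffer) { // illustrative
    // the lambda is a ByteProcessor; returning true means "keep going"
    return buffer.forEachByte(value -> value != '\r'); // index of first CR, or -1
}
```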
Result:
Simpler code.
Motivation:
Some buffers implement ByteBuf#order(order) by wrapping themselves in a SwappedByteBuf.
The SwappedByteBuf is then responsible for swapping the byte order on accesses.
The explicitly little-endian accessor methods, however, should not be swapped to big-endian, but instead remain explicitly little-endian.
Modification:
The SwappedByteBuf was passing calls to, e.g., writeIntLE through to the big-endian equivalent, e.g. writeInt.
This has been changed so that these calls delegate to their explicitly little-endian counterpart.
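The expected behaviour after the fix (sketch):

```java
ByteBuf buf = Unpooled.buffer(8);                     // big-endian by default
ByteBuf swapped = buf.order(ByteOrder.LITTLE_ENDIAN); // may be a SwappedByteBuf
swapped.writeIntLE(0x01020304);
// explicitly little-endian layout expected: 04 03 02 01, regardless of the swap
```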
Result:
This makes all buffers that make use of SwappedByteBuf for their endian-ness configuration, consistent with all the buffers that use other implementation strategies.
In the end, all buffers now behave exactly the same, when using their explicitly little-endian accessor methods.
Motivation:
HTTP is a plaintext protocol, which means that someone may be able
to eavesdrop on the data. To prevent this, HTTPS should be used whenever
possible. However, consistently using https:// in all URLs can be
difficult. The nohttp tool can help here: it scans all the files
in a repository and reports where http:// is used.
Modifications:
- Added nohttp (via checkstyle) into the build process.
- Suppressed findings for the websites
that don't support HTTPS or that are not reachable
Result:
- Prevent using HTTP in the future.
- Encourage users to use HTTPS when they follow the links they found in
the code.
Motivation:
JUnit deprecated Assert.assertThat(...).
Modifications:
Use MatcherAssert.assertThat(...) as a replacement for the deprecated method.
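The change is essentially just the import; call sites stay the same:

```java
// before: import static org.junit.Assert.assertThat;   (deprecated)
import static org.hamcrest.MatcherAssert.assertThat;

// usage is unchanged, e.g.:
// assertThat(buf.readableBytes(), is(4));
```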
Result:
Fewer deprecation warnings
Motivation:
LGTM reports multiple issues. They need to be triaged,
and real ones should be fixed.
Modifications:
- Fixed multiple issues reported by LGTM, such as redundant conditions,
resource leaks, typos, possible integer overflows.
- Suppressed false-positives.
- Added a few testcases.
Result:
Fixed several possible issues and got rid of false alarms in the LGTM report.
Motivation:
As the PooledByteBufAllocator is a critical part of Netty, we should ensure it works as expected.
Modifications:
- Add a few more asserts to ensure we do not see any corrupted state
- Null out the slot in the subpage array once the subpage has been freed and removed from the pool
- Merge methods into the constructor, as they were only called from the constructor anyway.
Result:
Code cleanup
Motivation:
- To make ensureWritable throw IOOBE when maxCapacity is exceeded, even if
the requested new capacity would overflow Integer.MAX_VALUE
Modification:
- AbstractByteBuf.ensureWritable0 is modified to detect when
targetCapacity has wrapped around
- Test added for correct behaviour in AbstractByteBufTest
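A sketch of the wrap-around detection described above (names illustrative, not the exact code):

```java
static void checkWritable(int writerIndex, int minWritableBytes, int maxCapacity) {
    int targetCapacity = writerIndex + minWritableBytes;
    // a negative target means the int addition wrapped past Integer.MAX_VALUE
    if (targetCapacity < 0 || targetCapacity > maxCapacity) {
        throw new IndexOutOfBoundsException("writerIndex(" + writerIndex
                + ") + minWritableBytes(" + minWritableBytes
                + ") exceeds maxCapacity(" + maxCapacity + ')');
    }
}
```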
Result:
- Calls to ensureWritable will always throw IOOBE when maxCapacity is
exceeded (and bounds checking is enabled)
Motivation:
writeUtf8 can suffer from inlining issues and/or megamorphic call sites on the hot path due to the ByteBuf hierarchy.
Modifications:
Duplicate and specialize the code paths to reduce the need of polymorphic calls
Result:
Performance is more stable in user code.
Motivation:
If ByteBufUtil.getBytes() is called with copy=false, it does not
correctly check that the underlying array can be shared in some cases.
In particular:
* It does not check that the arrayOffset() is zero. This causes it to
incorrectly return the underlying array if the other conditions are
met. The returned array will be longer than requested, with additional
unwanted bytes at its start.
* It assumes that the capacity() of the ByteBuf is equal to the backing
array length. This is not true for some types of ByteBuf, such as
PooledHeapByteBuf. This causes it to incorrectly return the underlying
array if the other conditions are met. The returned array will be
longer than requested, with additional unwanted bytes at its end.
Modifications:
This commit fixes the two bugs by:
* Checking that the arrayOffset() is zero before returning the
underlying array.
* Comparing the requested length to the underlying array's length,
rather than the ByteBuf's capacity, before returning the underlying
array.
This commit also adds a series of test cases for ByteBufUtil.getBytes().
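A sketch of the corrected sharing conditions (illustrative shape, not the exact code):

```java
import java.util.Arrays;

static byte[] getBytesSketch(ByteBuf buf, int start, int length, boolean copy) {
    if (buf.hasArray()) {
        int baseOffset = buf.arrayOffset() + start;
        byte[] bytes = buf.array();
        if (!copy && baseOffset == 0 && length == bytes.length) {
            return bytes; // safe to share: the region is exactly the backing array
        }
        return Arrays.copyOfRange(bytes, baseOffset, baseOffset + length);
    }
    byte[] bytes = new byte[length];
    buf.getBytes(start, bytes);
    return bytes;
}
```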
Result:
ByteBufUtil.getBytes() now correctly checks whether the underlying array
can be shared or not.
The test cases will ensure the bug is not reintroduced in the future.
Motivation
This is used solely for the DataOutput#writeUTF() method, which may
often not be used.
Modifications
Lazily construct the contained DataOutputStream in ByteBufOutputStream.
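A sketch of the lazy construction inside ByteBufOutputStream (field and method shapes illustrative):

```java
private DataOutputStream utf8out; // no longer created eagerly in the constructor

@Override
public void writeUTF(String s) throws IOException {
    if (utf8out == null) {
        utf8out = new DataOutputStream(this); // pay the allocation only when needed
    }
    utf8out.writeUTF(s);
}
```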
Result
Saves an allocation in some common cases
Motivation
ByteBuf has an isAccessible method which was introduced as part of ref
counting optimizations but there are some places still doing
accessibility checks by accessing the volatile refCnt() directly.
Modifications
- Have PooledNonRetained(Duplicate|Sliced)ByteBuf#isAccessible() use
their refcount delegate's isAccessible() method
- Add static isAccessible(buf) and ensureAccessible(buf) methods to
ByteBufUtil
(since ByteBuf#isAccessible() is package-private)
- Adjust DefaultByteBufHolder and similar classes to use these methods
rather than access refCnt() directly
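Call sites outside the io.netty.buffer package can then do, for example:

```java
static void checkContent(ByteBufHolder holder) { // illustrative
    // instead of: if (holder.content().refCnt() == 0) throw ...
    ByteBufUtil.ensureAccessible(holder.content());
}
```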
Result
- More efficient accessibility checks in more places
Motivation:
We shouldn't call incSmallAllocation() in a synchronized block, as it's backed by a concurrent data structure.
Modifications:
Move the call to incSmallAllocation() out of the synchronized block.
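The shape of the change (sketch with hypothetical method names):

```java
boolean allocated;
synchronized (head) {
    allocated = allocateFromSubpage(head); // hypothetical: work that needs the lock
}
if (allocated) {
    incSmallAllocation(); // concurrent counter; safe to update outside the lock
}
```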
Result:
Minimize scope of synchronized block
Motivation:
For sizes from 512 bytes to chunkSize, we use a buddy algorithm. Its
drawback is large internal fragmentation.
Modifications:
1. Add SizeClassesMetric and SizeClasses.
2. Remove the tiny size class; now we have small, normal, and huge sizes.
3. Rewrite the structure of PoolChunk.
4. Rewrite the pooled allocation algorithm in PoolChunk.
5. When allocating a subpage, use the lowest common multiple of pageSize and
elemSize instead of pageSize. For example, with pageSize = 8192 and elemSize = 7168,
the run size is lcm(8192, 7168) = 57344 bytes, which holds exactly 8 elements with no waste.
6. Add more tests in PooledByteBufAllocatorTest and PoolArenaTest.
Result:
Reduces internal fragmentation. Closes #3910.
Motivation:
We should include as much detail as possible when throwing an IllegalArgumentException because of an overflow in CompositeByteBuf.
Modifications:
Add more details and factor out the check into a static method to share code.
Result:
Make it clearer why an operation failed.
Motivation
An NPE was reported in #10245, caused by a regression introduced in
#8939. This in particular affects ByteToMessageDecoders that use the
COMPOSITE_CUMULATOR.
Modification
- Unwrap WrappedCompositeByteBufs passed to
CompositeByteBuf#addFlattenedComponents(...) method before accessing
internal components field
- Extend unit test to cover this case and ensure more of the
CompositeByteBuf tests are also run on the wrapped variant
Results
Fixes #10245
Motivation:
We can make use of our optimized implementations for UTF-8 and US-ASCII if the user requests a copy of a sequence in these charsets.
Modifications:
- Add fastpath implementation
- Add unit tests
Result:
Fixes https://github.com/netty/netty/issues/10205
Motivation:
We need to ensure we do not overflow when adding buffers to a CompositeByteBuf.
Modifications:
- Correctly validate overflow before adding to the internal storage
- Add testcase
Result:
Fixes https://github.com/netty/netty/issues/10194
Motivation:
We have found out that ByteBufUtil.indexOf can be inefficient for substring search on
ByteBuf, both in terms of algorithm complexity (worst case O(needle.readableBytes *
haystack.readableBytes)), and in constant factor (esp. on Composite buffers).
With the implementation of more performant search algorithms, we have seen improvements of
an order of magnitude.
Modifications:
This change introduces three search algorithms:
1. Knuth Morris Pratt - classical textbook algorithm, a good default choice.
2. Bit-mask based algorithm - stable performance on any input, but limited to a maximum
search substring (the needle) length of 64 bytes.
3. Aho–Corasick - worse performance and higher memory consumption than [1] and [2], but
it supports searching for multiple substrings (the needles) simultaneously, while inspecting
every byte of the haystack only once.
Each algorithm processes every byte of the underlying buffer only once; they are implemented
as ByteProcessors. A minimal KMP sketch follows below.
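As a hedged illustration of the ByteProcessor formulation, here is a minimal Knuth-Morris-Pratt sketch (the actual Netty classes and factory names may differ):

```java
import io.netty.util.ByteProcessor;

final class KmpSearchProcessor implements ByteProcessor {
    private final byte[] needle;
    private final int[] lps; // longest proper prefix that is also a suffix
    private int matched;     // current match length

    KmpSearchProcessor(byte[] needle) {
        this.needle = needle.clone();
        this.lps = new int[needle.length];
        for (int i = 1, len = 0; i < needle.length; ) {
            if (needle[i] == needle[len]) {
                lps[i++] = ++len;
            } else if (len > 0) {
                len = lps[len - 1];
            } else {
                lps[i++] = 0;
            }
        }
    }

    @Override
    public boolean process(byte value) {
        while (matched > 0 && value != needle[matched]) {
            matched = lps[matched - 1]; // fall back without re-reading haystack bytes
        }
        if (value == needle[matched]) {
            matched++;
        }
        return matched != needle.length; // false stops at the last byte of a match
    }
}

// usage: haystack.forEachByte(new KmpSearchProcessor(needle)) returns the index
// of the last byte of the first occurrence, or -1 if there is no match
```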
Result:
Efficient search algorithms with linear time complexity available in Netty (I will share
benchmark results in a comment on a PR).