Motivation:
Pooled buffers are an important use case, and they change the cost dynamics around shared memory segments, so they are worth examining in detail.
Modification:
Add another explicit close of pooled direct buffers to MemorySegmentClosedByCleanerBenchmark
Result:
Explicitly closing pooled buffers even outperforms cleaner-based closing on the "heavy" workload, so this is currently the fastest way to run that workload:
Benchmark                                                  (workload)  Mode  Cnt   Score   Error  Units
MemorySegmentClosedByCleanerBenchmark.cleanerClose              heavy  avgt  150  14,194 ± 0,558  us/op
MemorySegmentClosedByCleanerBenchmark.explicitClose             heavy  avgt  150  40,496 ± 0,414  us/op
MemorySegmentClosedByCleanerBenchmark.explicitPooledClose       heavy  avgt  150  12,723 ± 0,134  us/op
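For illustration, the benchmark method added for this might look roughly like the following hedged sketch; the allocator field, size constant, and workload method are assumptions, not the actual benchmark code:

    @Benchmark
    public void explicitPooledClose() {
        // Allocate from the pool, then close explicitly; close() returns
        // the memory to the pool rather than deallocating it.
        try (Buf buf = pooledAllocator.allocate(SIZE)) {
            workload(buf); // the "heavy" workload
        }
    }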
When send()-ing a confined buffer, we had to first turn it into a shared buffer, so that it could be claimed by an arbitrary recipient thread.
As we've learned, however, closing shared segments is expensive.
We can speed up the send() call by simply leaving the segment shared.
This weakens the confinement of the received segment, though.
Currently no tests fail because of this, but in the future we should re-implement confinement checking inside the Buf implementations themselves anyway, because pooled buffers also violate the confinement restriction, and we have a guiding principle that all buffers, regardless of implementation, should always behave the same.
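To illustrate the segment state transition that send() previously paid for, here is a hedged sketch against the JDK 16 incubator API (jdk.incubator.foreign); this is not the project's actual code:

    MemorySegment confined = MemorySegment.allocateNative(size); // confined to the allocating thread
    MemorySegment shared = confined.share(); // previously done on every send()
    // After this change, the segment is left shared for its whole life,
    // so send() only transfers ownership and skips this transition.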
The results of this change can be observed in the MemorySegmentClosedByCleanerBenchmark, with the heavy workload.
Explicitly closed segments now run the workload almost twice as fast, and cleaner-based closing is now roughly 3 times faster.
Before:
Benchmark                                                  (workload)  Mode  Cnt   Score   Error  Units
MemorySegmentClosedByCleanerBenchmark.cleanerClose              heavy  avgt  150  42,221 ± 0,943  us/op
MemorySegmentClosedByCleanerBenchmark.explicitClose             heavy  avgt  150  65,215 ± 0,761  us/op
After:
Benchmark                                                  (workload)  Mode  Cnt   Score   Error  Units
MemorySegmentClosedByCleanerBenchmark.cleanerClose              heavy  avgt  150  13,871 ± 0,544  us/op
MemorySegmentClosedByCleanerBenchmark.explicitClose             heavy  avgt  150  37,516 ± 0,426  us/op
Also schedule a build to run at 06:30 every Monday morning.
This way, the JDK and Netty 5 snapshots will be updated in the build cache every week.
Motivation:
Buffers should always behave the same, regardless of their underlying implementation and how they are allocated.
Modification:
The SizeClassedMemoryPool did not properly reset the internal state of its buffers prior to reusing them.
The offsets, byte order, and contents are now cleared before a buffer is reused.
Result:
There is no way to observe externally whether a buffer was reused or not.
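A hedged sketch of the kind of reset now performed before reuse, using the public Buf API (the actual pool internals may differ, and java.nio.ByteOrder is assumed):

    private Buf resetForReuse(Buf buf) {
        buf.readerOffset(0);                // clear the read offset
        buf.writerOffset(0);                // clear the write offset
        buf.order(ByteOrder.nativeOrder()); // restore the default byte order
        buf.fill((byte) 0);                 // wipe the previous contents
        return buf;
    }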
Motivation:
Because of the current dependency on snapshot versions of the Panama Foreign version of OpenJDK 16, this project is fairly involved to build.
Modification:
To make it easier for newcomers to build the binaries for this project, a Docker-based build is added.
The Docker image is constructed such that it contains a fresh snapshot build of the right fork of Java.
A Makefile has also been added, which encapsulates the common commands one would use when working with the Docker build.
Result:
It is now easy for newcomers to build and test this project, as long as they have a working Docker installation.
Motivation:
Resource lifetime was not correctly handled.
Modification:
We cannot call drop(buf) on a pooled buffer in order to release its memory from within ensureWritable, because that will send() the buffer back to the pool, which implies closing the buffer instance.
Instead, ensureWritable must always work with untethered memory, so new APIs are added to AllocatorControl for releasing untethered memory.
The implementation already existed, because it was used by NativeMemoryCleanerDrop.
Result:
Buf.ensureWritable no longer closes pooled buffers.
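A hedged usage sketch of the fixed behaviour (the allocator setup is assumed):

    try (Buf buf = pooledAllocator.allocate(16)) {
        buf.writeLong(1);
        buf.writeLong(2);      // the buffer is now full
        buf.ensureWritable(8); // grows the buffer, but does NOT close it
        buf.writeLong(3);      // the same instance remains usable
    }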
Motivation:
Having buffers that are able to expand to accommodate more data on demand is a great convenience.
Modification:
Composite and MemSeg buffers are now able to mutate their backing storage, to increase their capacity.
This required some tricky integration with allocators via AllocatorControl.
Basically, it is now possible to allocate memory that is NOT bound to any lifetime, so that it can be attached to the lifetime that already exists for the buffer being expanded.
Result:
Buffers can now be expanded via Buf.ensureWritable.
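A hedged sketch of the shape of that AllocatorControl interaction; the method names here are illustrative assumptions, not the exact API:

    public interface AllocatorControl {
        // Allocate memory that is not bound to any lifetime, so the caller
        // can attach it to the lifetime of the buffer being expanded.
        Object allocateUntethered(Buf originator, int size);

        // Return untethered memory that ended up not being attached.
        void recoverMemory(Object memory);
    }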
Motivation:
Slices should behave identically to normal and composite buffers in all but a very select few aspects related to ownership.
Modification:
Extend the test generation to also produce slice versions of nearly all test cases: both slices that cover the entire buffer, and slices that cover only a part of their parent buffer.
Also fix a handful of bugs that this uncovered.
Result:
Buffer slices are now tested much more thoroughly, and a few bugs were fixed.
Motivation:
There's a desire to be able to clearly tell from a method name whether or not it updates the reader or writer offsets.
Modification:
The accessor methods that take offsets as arguments (and thus do not update reader or writer offsets) have now been changed to follow a get/set naming pattern.
The methods that *do* update reader and writer offsets are still following a read/write naming pattern.
Result:
This makes it even clearer whether or not the relative offsets are updated.
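For example, with this convention the offset behaviour is visible at the call site:

    long v1 = buf.readLong(); // relative: advances readerOffset by 8
    long v2 = buf.getLong(0); // positional: offsets are left untouched
    buf.writeLong(42);        // relative: advances writerOffset by 8
    buf.setLong(8, 42);       // positional: offsets are left untouched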
Motivation:
With the number of primitive accessor methods reduced, now that only the configured byte order is supported, it no longer makes sense to maintain the code generator.
Modification:
Delete Codegen.
Result:
Less code to maintain.
Motivation:
We can reduce the number of test classes by expressing the same coverage with parameterized tests in JUnit 5.
This also removes the strictly tree-shaped dependencies between the tests.
Modification:
Change the new buffer API tests to use JUnit 5 and AssertJ.
Result:
A single test for all buffer implementations.
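A hedged sketch of the parameterized style; the fixture type and method source are illustrative assumptions, and the usual JUnit 5 and AssertJ imports are implied:

    @ParameterizedTest
    @MethodSource("allocators")
    void writeThenReadLong(Fixture fixture) {
        try (Allocator allocator = fixture.createAllocator();
             Buf buf = allocator.allocate(8)) {
            buf.writeLong(0x0102030405060708L);
            assertThat(buf.readLong()).isEqualTo(0x0102030405060708L);
        }
    }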
Motivation:
We have the ability to iterate through the bytes in a buffer with the ByteIterator, but another important use case is being able to iterate through the bytes in reverse order.
Modification:
Add methods for iterating through the bytes in a buffer in reverse order, and update the copyInto methods to make use of them. Also add some missing javadocs and argument checks.
Result:
We can also use ByteIterator for efficiently processing data within a buffer in reverse order.
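A hedged usage sketch (the accessor and method names are assumed): counting the trailing zero bytes of a buffer by walking backwards:

    ByteIterator itr = buf.iterateReverse();
    int trailingZeroes = 0;
    while (itr.hasNextByte() && itr.nextByte() == 0) {
        trailingZeroes++;
    }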
Motivation:
We need a simple API to efficiently iterate a buffer.
We've used the ByteProcessor so far, and while its internal iteration API is simple, it loses some efficiency by forcing code to only consider one byte at a time.
Modification:
The ByteIterator fills the same niche as the ByteProcessor, but uses external iteration instead of internal iteration.
This allows integrators to control the pace of iteration, and it makes it possible to expose methods for consuming bytes in bulk: one long of 8 bytes at a time.
This makes it possible to use the iterator in SIMD-Within-A-Register, or SWAR, data processing algorithms.
Result:
We have a ByteIterator for efficiently processing data within a buffer.
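A hedged usage sketch (method names assumed): detecting whether a buffer contains a zero byte with the classic SWAR "haszero" trick, 8 bytes at a time:

    ByteIterator itr = buf.iterate();
    boolean hasZero = false;
    while (!hasZero && itr.hasNextLong()) {
        long v = itr.nextLong();
        // Non-zero iff at least one of the 8 byte lanes in v is zero.
        hasZero = ((v - 0x0101010101010101L) & ~v & 0x8080808080808080L) != 0;
    }
    while (!hasZero && itr.hasNextByte()) { // remaining tail bytes
        hasZero = itr.nextByte() == 0;
    }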
Motivation:
Copy methods are useful for bulk-moving data into locations that are more convenient for whatever processing comes next in a given context.
Modification:
Add bulk copyInto methods for copying regions of buffer contents into arrays, byte buffers, and other Buf instances.
Some of these implementations are not optimised yet, however, since we are primarily concerned with getting the API right at this point; implementation maturity comes later.
Result:
We can now bulk copy data from a Buf into other convenient forms.
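A hedged usage sketch (the exact signatures are assumed from the description above; java.nio.ByteBuffer is implied):

    byte[] array = new byte[buf.readableBytes()];
    buf.copyInto(buf.readerOffset(), array, 0, array.length); // into an array

    ByteBuffer nio = ByteBuffer.allocateDirect(array.length);
    buf.copyInto(buf.readerOffset(), nio, 0, array.length);   // into a ByteBuffer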
Motivation:
We don't want to support buffers larger than what can be addressed with an int.
This ensures we won't run into trouble with the max IO size on various operating systems.
Motivation:
Reference-counted objects may be stateful, and cannot always be sent or have their ownership transferred.
It's desirable that integrators can check whether or not an object can be sent.
Modification:
Add an Rc.isSendable method that returns true if the object can be sent, and false otherwise.
Implementors of the Rc interface, and extenders of RcSupport, can then implement whatever special logic they need for restricting sending in certain situations.
Result:
It's possible to test if an object supports send() or not in any given situation.
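A hedged usage sketch; the executor and the process method are assumptions:

    if (buf.isSendable()) {
        Send<Buf> send = buf.send(); // ownership leaves this thread
        executor.submit(() -> {
            try (Buf received = send.receive()) {
                process(received); // hypothetical consumer
            }
        });
    } else {
        // e.g. a slice, or otherwise borrowed; handle it locally instead
    }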
Motivation:
Composite buffers make it possible to combine the data from multiple buffers and present them as one, without copying the contents. Each access primitive has slightly higher overhead, though, so it is encouraged to make measurements before deciding whether to compose or copy.
Modification:
A CompositeBuf implementation has been added, along with a Buf#compose factory method.
The composite buffer behaves exactly the same as non-composed buffers.
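A hedged usage sketch of the factory (the allocator setup is assumed):

    try (Buf a = allocator.allocate(8);
         Buf b = allocator.allocate(8);
         Buf composed = Buf.compose(a, b)) {
        // composed presents the contents of a and b as one buffer,
        // without copying them
    }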
Motivation:
Slicing gives you a derived buffer. This is useful for sending along just the part of a buffer that has the relevant data, or to get a new buffer instance for the same data, but with independent read and write offsets.
Modification:
Add slice() methods to the Buf interface, and implement them for MemSegBuf.
Buffer slices increment the reference count of the parent buffer, which prevents the parent from being send()-able.
Slices are themselves also not send()-able.
This is because send() involves ownership transfer, while slicing is like lending out mutable borrows.
The send() capability returns to the parent buffer once all slices are closed.
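A hedged sketch of the ownership rules described above (the slice signature is assumed):

    try (Buf buf = allocator.allocate(16)) {
        try (Buf slice = buf.slice(0, 8)) {
            // buf.isSendable() is false here: the slice borrows from buf
        }
        // all slices are closed again, so buf regains its send() capability
    }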
Motivation:
Having method variants for explicit endians made the API too wide.
Modification:
Remove all LE/BE accessor method variants from the Buf API and implementation.
Result:
The Buf API is now simpler.
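For example, where callers previously had to pick an endian-specific variant, they now configure the buffer's byte order once (a hedged sketch; writeIntLE stands in for the removed variants):

    buf.order(ByteOrder.LITTLE_ENDIAN); // was: buf.writeIntLE(value)
    buf.writeInt(value);                // uses the configured byte order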
Motivation:
We'd like to separate the API and the implementation, so we can make other implementations in the future.
This will allow us to deliver the API changes without the new MemorySegment implementation.
Modification:
The MemoryManager interface abstracts away the implementation details of the concrete buffer type, and how to allocate and deallocate them.
Result:
Knowledge of MemorySegments is now confined to just the MemSegBuf and MemoryManager implementation classes.
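A hedged sketch of the kind of abstraction described; the method names are illustrative assumptions, not the exact interface:

    public interface MemoryManager {
        // Allocate a buffer that is confined to the calling thread,
        // or one that can be shared across threads.
        Buf allocateConfined(AllocatorControl control, long size, Drop<Buf> drop);
        Buf allocateShared(AllocatorControl control, long size, Drop<Buf> drop);

        // Release the backing memory of a buffer this manager allocated.
        void drop(Buf buf);
    }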