Motivation:
Sometimes, we wish to operate on both buffers and anything that can produce a buffer.
For instance, when making a composite buffer, we could compose either buffers or sends.
Modification:
Introduce a Deref interface, which is extended by both Rc and Send.
A Deref can be used to acquire an Rc instance, and in doing so will also acquire a reference to the Rc.
That is, dereferencing increases the reference count.
For Rc itself, this just delegates to Rc.acquire, while for Send it delegates to Send.receive, which can only be called once.
The Allocator.compose method has been changed to take Derefs.
This allows us to compose either Bufs or Sends of Bufs, or a mix of the two.
Extra care has been taken in the code to make sure the reference counts are managed correctly when composing buffers, now that this is a more complicated operation.
A handful of convenience methods for working with Sends have also been added to the Send interface.
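For illustration, a minimal sketch of mixed composition; the io.netty.buffer.api package, the Allocator.heap() factory and the exact compose signature are assumptions here:

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;
import io.netty.buffer.api.Send;

public final class ComposeExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap()) {
            Buf a = allocator.allocate(8);
            a.writeInt(1);
            Buf tmp = allocator.allocate(8);
            tmp.writeInt(2);
            Send<Buf> b = tmp.send(); // a Send<Buf> is also a Deref
            // Composing dereferences each argument, acquiring a reference in the process:
            // the Buf is acquired, and the Send is received (consuming it).
            try (Buf composite = allocator.compose(a, b)) {
                // ... use the composite ...
            }
            a.close(); // we still hold our own reference to 'a' and must close it
        }
    }
}
```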
Result:
We can now build a composite buffer out of sends of buffers.
Motivation:
The forEachReadable/Writable methods permit a cleaner FileCopyExample implementation.
Modification:
Simplify FileCopyExample.
Also add examples of various good and bad ways to transfer buffer ownership between threads.
Update the forEachReadable/Writable APIs to let exceptions pass through.
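For illustration, a sketch of what exception pass-through enables, assuming the processor's functional method can declare checked exceptions, takes an initial index, and returns a boolean to control iteration (these signature details are assumptions):

```java
import java.io.IOException;
import java.nio.channels.WritableByteChannel;

import io.netty.buffer.api.Buf;

public final class ChannelWriteExample {
    // With exceptions passing through, the checked IOException from the channel
    // propagates out of forEachReadable instead of needing to be wrapped.
    static void writeTo(Buf buf, WritableByteChannel channel) throws IOException {
        buf.forEachReadable(0, (index, component) -> {
            channel.write(component.readableBuffer());
            return true; // keep iterating over the remaining components
        });
    }
}
```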
Result:
Cleaner code and more useful forEachReadable/Writable APIs.
Motivation:
There is no reason that composite buffers should nest when composed.
Instead, when composite buffers are used to compose or extend other composite buffers, we should unwrap them and copy the references to their constituent buffers.
Modification:
Composite buffers now always unwrap and flatten themselves when they participate in composition or extension of other composite buffers.
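A minimal sketch of the flattening behaviour, with assumed factory and method names (Allocator.heap(), compose):

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;

public final class FlattenExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap()) {
            Buf a = allocator.allocate(8);
            Buf b = allocator.allocate(8);
            Buf inner = allocator.compose(a, b);     // composite of two leaf buffers
            Buf c = allocator.allocate(8);
            Buf outer = allocator.compose(inner, c); // the inner composite is unwrapped
            // 'outer' references the three leaf buffers directly; it does not nest 'inner'.
            outer.close();
            c.close();
            inner.close();
            b.close();
            a.close();
        }
    }
}
```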
Result:
Composite buffers are now always guaranteed* to contain a single level of non-composed leaf buffers.
*assuming no other unknown buffer-wrapping buffer type is in the mix.
Motivation:
It's desirable to be able to access the contents of a Buf via an array or a ByteBuffer.
However, we would also like to have a unified API that works for both composite and non-composite buffers.
Even for nested composite buffers.
Modification:
Add a forEachReadable method, which uses internal iteration to process all buffer components.
The internal iteration allows us to hide any nesting of composite buffers.
The consumer in the internal iteration is presented with a Component object, which exposes the contents in various ways.
The data is exposed from the Component via methods, so that anything that is expensive to create does not have to be paid for unless it is used.
This mechanism also lets us avoid unnecessary allocation: the ByteBuffers and arrays necessarily have to be allocated, but the consumer may or may not need to allocate depending on how it is implemented, and the Component objects themselves do not need to be allocated, because non-composite buffers can implement the Component interface directly.
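A minimal sketch of the internal iteration, assuming accessor names such as readableBuffer() on the Component and an initial-index parameter on forEachReadable:

```java
import java.nio.ByteBuffer;

import io.netty.buffer.api.Buf;

public final class ComponentIterationExample {
    // Sums every readable byte of a buffer, composite or not, without copying.
    static long sum(Buf buf) {
        long[] total = {0};
        buf.forEachReadable(0, (index, component) -> {
            ByteBuffer bytes = component.readableBuffer(); // a view, not a copy
            while (bytes.hasRemaining()) {
                total[0] += bytes.get() & 0xFF;
            }
            return true; // continue with the next component, if any
        });
        return total[0];
    }
}
```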
Result:
It's now possible to access the contents of Buf instances as arrays or ByteBuffers, without having to copy the data.
Motivation:
There are cases where you want a buffer to be "constant."
Buffers are inherently mutable, but it's possible to block off write access to the buffer contents.
This doesn't make it completely safe to share the buffer across multiple threads, but it does catch most races that could occur.
Modification:
Add a method to Buf for toggling read-only mode.
When a buffer is read-only, the write accessors throw exceptions when called.
In the MemSegBuf, this is implemented by having separate read and write references to the underlying memory segment.
In a read-only buffer, the write reference is redirected to point to a closed memory segment, thus preventing all writes to the memory backing the buffer.
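A minimal sketch, assuming a readOnly(boolean) mutator on Buf and the assumed Allocator.heap() factory:

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;

public final class ReadOnlyExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap();
             Buf buf = allocator.allocate(16)) {
            buf.writeInt(42);
            buf.readOnly(true);        // block off write access
            int value = buf.readInt(); // reads still work
            try {
                buf.writeInt(7);       // write accessors now throw
            } catch (Exception expected) {
                // expected while the buffer is read-only
            }
            buf.readOnly(false);       // e.g. a pool resetting the buffer state
            buf.writeInt(value);       // writable again
        }
    }
}
```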
Result:
It is now possible to make buffers read-only.
Note, however, that it is also possible to toggle a read-only buffer back to writable.
We need that in order for buffer pools to be able to fully reset the state of a buffer, regardless of the buffer implementation.
Motivation:
The main use case with Buf.compact is in conjunction with ensureWritable.
It turns out we can get a simpler API, and faster methods, by combining the two operations: doing so lets us relax some guarantees and skip some steps in certain cases, which would not be as neat or clean if they were kept as two separate steps.
Modification:
Add a new Buf.ensureWritable method, which takes an allowCompaction argument.
In MemSegBuf, we can just delegate to compact() when applicable.
In CompositeBuf, we can sometimes get away with just reorganising the bufs array.
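A minimal sketch, assuming an ensureWritable(size, allowCompaction) overload (the exact parameter list and the Allocator.heap() factory are assumptions):

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;

public final class EnsureWritableExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap();
             Buf buf = allocator.allocate(16)) {
            buf.writeLong(1);
            buf.writeLong(2); // the buffer is now full
            buf.readLong();   // the first 8 bytes have been processed
            // Allowing compaction means the read bytes can be discarded to make room,
            // instead of allocating and copying into a larger backing store.
            buf.ensureWritable(Long.BYTES, true);
            buf.writeLong(3);
        }
    }
}
```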
Result:
We can now do ensureWritable without allocating in some cases, and this can in particular make the operation faster for CompositeBuf.
Motivation:
Compaction makes more space available at the end of a buffer, by discarding bytes at the beginning that have already been processed.
Modification:
Add a copying compact method to Buf.
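A minimal sketch of discarding read bytes (the factory and offset accessor names are assumptions):

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;

public final class CompactExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap();
             Buf buf = allocator.allocate(16)) {
            buf.writeLong(1);
            buf.writeLong(2);
            buf.readLong(); // the first 8 bytes are now processed
            buf.compact();  // copy the remaining readable bytes to the front
            // The reader offset is back at 0 and 8 bytes of writable space were reclaimed.
            buf.writeLong(3);
        }
    }
}
```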
Result:
It is now possible to discard read bytes by calling `compact()`.
Motivation:
There are many use cases where other objects will have fields that are buffers.
Since buffers are reference counted, their life cycle needs to be managed carefully.
Modification:
Add the abstract BufHolder, and the concrete sub-class BufRef, as neat building blocks for building other classes that contain field references to buffers.
The behaviours of closed/sent buffers have also been specified in tests, and tightened up in the code.
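For context, a sketch of the manual pattern these building blocks encapsulate: a class with a buffer field that takes ownership on construction and releases it on close. BufHolder/BufRef are meant to take this bookkeeping off our hands; the Frame class here is purely hypothetical:

```java
import io.netty.buffer.api.Buf;
import io.netty.buffer.api.Send;

// The kind of class BufHolder/BufRef are meant to simplify: an object with a
// buffer field whose life cycle must be tied to the owning object.
public final class Frame implements AutoCloseable {
    private final int streamId;
    private final Buf payload;

    public Frame(int streamId, Send<Buf> payload) {
        this.streamId = streamId;
        this.payload = payload.receive(); // take ownership of the sent buffer
    }

    public int streamId() {
        return streamId;
    }

    public Buf payload() {
        return payload;
    }

    @Override
    public void close() {
        payload.close(); // release our reference when the frame is done
    }
}
```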
Result:
It is now easier to create classes/objects that wrap buffers.
Motivation:
There are use cases that involve accumulating data into a buffer, then carving out prefix slices and sending them off on their own journey for further processing.
Modification:
Add a Buf.bifurcate API that splits a buffer, and its ownership, in two.
Internally, the API will inject and maintain an atomically reference counted Drop instance, so that the original memory segment is not released until all bifurcated parts are closed.
This works particularly well for composite buffers, where only the constituent buffer (if any) in which the bifurcation point lands will actually have its memory split; the composite buffer can otherwise just crack its buffer array in two.
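A minimal sketch of the accumulate-and-carve-off pattern, assuming bifurcate() splits at the current write offset and returns the front part as a new, independently owned buffer (the split point and factory name are assumptions):

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;
import io.netty.buffer.api.Send;

public final class BifurcateExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap()) {
            Buf accumulator = allocator.allocate(32);
            accumulator.writeLong(1);
            accumulator.writeLong(2);
            // Carve off the bytes accumulated so far. 'prefix' and 'accumulator' are
            // independently owned; the backing memory is only released once both close.
            Buf prefix = accumulator.bifurcate();
            Send<Buf> forProcessing = prefix.send(); // hand the prefix to another thread
            accumulator.writeLong(3);                // keep accumulating into the rest
            accumulator.close();
            forProcessing.receive().close();         // stand-in for the downstream consumer
        }
    }
}
```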
Result:
We now have a safe way of breaking the single ownership of some memory into multiple parts, that can be sent and owned independently.
Motivation:
Cursors are better than iterators in that they only need to check boundary conditions once per iteration, when processed in a loop.
This should make them easier for the compiler to optimise.
Modification:
Change the ByteIterator to a ByteCursor. The API is almost the same, but with a few subtle differences in semantics.
The primary difference is that cursor movement and boundary-condition checking happen at the same time, and do not need to occur when the values are fetched out of the cursor.
An iterator, on the other hand, needs to throw an exception if "next" is called too many times.
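A minimal sketch of cursor-based iteration, assuming an openCursor() factory on Buf and readByte()/getByte() methods on the cursor:

```java
import io.netty.buffer.api.Buf;
import io.netty.buffer.api.ByteCursor;

public final class CursorExample {
    // Counts the zero bytes in the readable region of a buffer.
    static int countZeroes(Buf buf) {
        int zeroes = 0;
        ByteCursor cursor = buf.openCursor();
        // The boundary check and position movement both happen in readByte();
        // getByte() just returns the value that was read.
        while (cursor.readByte()) {
            if (cursor.getByte() == 0) {
                zeroes++;
            }
        }
        return zeroes;
    }
}
```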
Result:
Simpler code, and hopefully faster code as well.
Motivation:
Composite buffers are uniquely positioned to be able to extend their underlying storage relatively cheaply.
This fact is relied upon in a couple of buffer use cases within Netty, that we wish to support.
Modification:
Add a static `extend` method to Allocator, so that the CompositeBuf class can remain internal.
The `extend` method inserts the extension buffer at the end of the composite buffer as if it had been included from the start.
This involves checking offsets and byte order invariants.
We also require that the composite buffer be in an owned state.
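A minimal sketch, assuming a static Allocator.extend(Buf, Buf) signature (the exact parameter types and the Allocator.heap() factory are assumptions):

```java
import io.netty.buffer.api.Allocator;
import io.netty.buffer.api.Buf;

public final class ExtendExample {
    public static void main(String[] args) {
        try (Allocator allocator = Allocator.heap()) {
            Buf composite;
            try (Buf a = allocator.allocate(8);
                 Buf b = allocator.allocate(8)) {
                composite = allocator.compose(a, b); // the composite takes its own references
            }
            try (Buf extension = allocator.allocate(8)) {
                // The composite must be owned, and the extension must satisfy the
                // offset and byte order invariants that extend checks.
                Allocator.extend(composite, extension);
            }
            composite.close(); // also releases the extension reference it acquired
        }
    }
}
```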
Result:
It's now possible to extend a composite buffer with a specific buffer, after the composite buffer has been created.
Motivation:
When a build fails, it's desirable to have the build artifacts available
so the build failure can be inspected, investigated and hopefully fixed.
Modification:
Change the Makefile and CI workflow such that the build artifacts are
captured and uploaded when the build fails.
Result:
A "target.zip" file is now available for download on failed GitHub
Actions builds.