Commit Graph

79 Commits

Chris Vest
2ce8c7dc18 Fix drop race with Cleaner
Motivation:
We were seeing rare test failures where a cleaner had raced to close a memory segment we were using or closing.
The cause was that a single MemorySegment ended up being used by multiple Buf instances.
When the SizeClassedMemoryPool was closed, the memory segments could be disposed without closing the gate in the NativeMemoryCleanerDrop.
The gate is important because it prevents double-frees of the memory segment.

Modification:
The fix is to change how the SizeClassedMemoryPool is closed, such that it always releases memory by calling `close()` on its buffers, which in turn will close the gate. The program will then proceed through the SizeClassedMemoryPool.drop implementation, which in turn will observe that the allocator is closed, and *then* dispose of the memory.
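For illustration, here is a minimal sketch of the gate idea, with hypothetical names (the real NativeMemoryCleanerDrop is more involved):

    import java.util.concurrent.atomic.AtomicBoolean;

    // Whichever of the cleaner and an explicit close() flips the gate first
    // gets to free the memory; the other call becomes a no-op, so the
    // segment cannot be double-freed.
    final class GatedDrop {
        private final AtomicBoolean gate = new AtomicBoolean();
        private final Runnable freeMemory;

        GatedDrop(Runnable freeMemory) {
            this.freeMemory = freeMemory;
        }

        void drop() {
            if (gate.compareAndSet(false, true)) {
                freeMemory.run();
            }
        }
    }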

Result:
We should hopefully not see any more random test failures, but if we do, they would at least indicate a different bug.
This particular one was mostly showing up inside the cleaner threads, which were ignoring the exception, but occasionally, I think, the race went the other way and caused a test failure.
2020-12-07 17:35:21 +01:00
Chris Vest
2f99ee64a4
Merge pull request #14 from netty/benchmark-send
Add a benchmark for Buf.send()
2020-12-04 21:18:46 +01:00
Chris Vest
0c40143f5f Fix license header years, and style updates 2020-12-04 18:48:06 +01:00
Chris Vest
80185abec4 Add a benchmark for Buf.send()
Motivation:
This will likely be a somewhat common operation, as buffers move between event loop and worker threads, so it's important to have an understanding of how it performs.

Modification:
Add a benchmark that specifically targets the send() operation on buffers.
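The rough shape of such a benchmark, as a sketch only (the actual benchmark class may differ; Allocator, Buf, and the fluent writeByte call are assumed from this project's API):

    import java.util.concurrent.CompletableFuture;
    import org.openjdk.jmh.annotations.*;

    @State(Scope.Benchmark)
    public class SendBenchmarkSketch {
        Allocator allocator;

        @Setup
        public void setUp() {
            allocator = Allocator.heap();
        }

        @TearDown
        public void tearDown() {
            allocator.close();
        }

        @Benchmark
        public byte sendAcrossThreads() throws Exception {
            Buf buf = allocator.allocate(8).writeByte((byte) 42);
            var send = buf.send(); // the buffer becomes inaccessible in this thread
            return CompletableFuture.supplyAsync(() -> {
                try (Buf received = send.receive()) { // ownership claimed by another thread
                    return received.readByte();
                }
            }).get();
        }
    }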

Result:
We got benchmark numbers that clearly show the cost of the confinement transfer.
2020-12-04 16:27:08 +01:00
Chris Vest
b0da25d888
Merge pull request #10 from netty/byte-itr-benchmark
Add benchmarks for ByteIterator
2020-12-02 15:14:18 +01:00
Chris Vest
6b7ea5f5cb Add benchmarks for ByteIterator
Motivation:
Capture the performance characteristics of this primitive for various buffer implementations.

Modification:
Add a benchmark that iterates 4KiB buffers forwards and backwards, on various buffer implementations.

Result:
Another aspect of the implementation covered by benchmarks.
It turns out the composite iterators are somewhat slow.
2020-12-02 14:54:02 +01:00
Chris Vest
fcd97af4f9
Merge pull request #7 from netty/over-eager-cleaner
Capture build artifacts for failed builds
2020-12-01 14:58:06 +01:00
Chris Vest
e3c7f9b632 Capture build artifacts for failed builds
Motivation:
When a build fails, it's desirable to have the build artifacts available
so the build failure can be inspected, investigated and hopefully fixed.

Modification:
Change the Makefile and CI workflow such that the build artifacts are
captured and uploaded when the build fails.

Result:
A "target.zip" file is now available for download on failed GitHub
Actions builds.
2020-12-01 14:38:09 +01:00
Chris Vest
e039f6f7f5
Merge pull request #9 from netty/more-close-benchmarks
Add benchmark for closing pooled buffers
2020-12-01 11:46:42 +01:00
Chris Vest
4a409d2458 Add benchmark for closing pooled buffers
Motivation:
Pooled buffers are a very important use case, and they change the cost dynamics around shared memory segments, so it's worth looking into in detail.

Modification:
Add another benchmark, which explicitly closes pooled direct buffers, to MemorySegmentClosedByCleanerBenchmark.

Result:
Explicitly closing pooled buffers even out-performs cleaner-based close on the "heavy" workload, so this is currently the fastest way to run that workload:

Benchmark                                                  (workload)  Mode  Cnt   Score   Error  Units
MemorySegmentClosedByCleanerBenchmark.cleanerClose              heavy  avgt  150  14,194 ± 0,558  us/op
MemorySegmentClosedByCleanerBenchmark.explicitClose             heavy  avgt  150  40,496 ± 0,414  us/op
MemorySegmentClosedByCleanerBenchmark.explicitPooledClose       heavy  avgt  150  12,723 ± 0,134  us/op
2020-12-01 11:15:44 +01:00
Chris Vest
89860b779a
Merge pull request #5 from netty/faster-send
Make Buf.send() faster
2020-11-26 13:59:47 +01:00
Chris Vest
a3f6ae6be8 Make Buf.send() faster
When send() was called on a confined buffer, we first had to turn it into a shared buffer, so that it could be claimed by an arbitrary recipient thread.

As we've learned, however, closing shared segments is expensive.
We can speed up the send() call by simply leaving the segment shared.
This weakens the confinement of the received segment, though.
Currently no tests fail on that, but in the future we should re-implement confinement checking inside the Buf implementations themselves anyway, because pooled buffers also violate the confinement restriction, and we have a guiding principle that all buffers, regardless of implementation, should always behave the same.
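A sketch of the change in approach, using the Panama API of that era (exact method names varied between builds):

    import jdk.incubator.foreign.MemorySegment;

    // A confined segment is owned by a single thread, while a shared segment
    // can be accessed and closed by any thread. Allocating the segment as
    // shared up front lets send() skip the confined-to-shared transfer.
    static MemorySegment allocateShared() {
        return MemorySegment.allocateNative(4096).share();
    }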

The results of this change can be observed in the MemorySegmentClosedByCleanerBenchmark, with the heavy workload.
Explicitly closed segments now run the workload twice as fast, and the cleaner based closing is now 3 times faster.

Before:
Benchmark                                            (workload)  Mode  Cnt   Score   Error  Units
MemorySegmentClosedByCleanerBenchmark.cleanerClose        heavy  avgt  150  42,221 ± 0,943  us/op
MemorySegmentClosedByCleanerBenchmark.explicitClose       heavy  avgt  150  65,215 ± 0,761  us/op

After:
Benchmark                                            (workload)  Mode  Cnt   Score   Error  Units
MemorySegmentClosedByCleanerBenchmark.cleanerClose        heavy  avgt  150  13,871 ± 0,544  us/op
MemorySegmentClosedByCleanerBenchmark.explicitClose       heavy  avgt  150  37,516 ± 0,426  us/op
2020-11-26 11:31:23 +01:00
Chris Vest
34a58a763c
Merge pull request #4 from netty/benchmarks2
Add benchmarks examining the performance difference between explicitly closing buffers and letting Cleaners close them
2020-11-26 11:12:59 +01:00
Chris Vest
6364c4d170 Add a benchmark examining the performance difference between explicitly closing memory segments and having them closed by cleaners 2020-11-26 10:37:14 +01:00
Chris Vest
f611d58a6e
Merge pull request #3 from netty/benchmarks
Add benchmarks for opening/closing shared/confined native/heap memory segments
2020-11-26 10:19:39 +01:00
Chris Vest
6078465721 Add a benchmark for opening and closing shared/confined native/heap memory segments 2020-11-24 10:56:22 +01:00
Chris Vest
cd9f84e856 The assertj-core dependency should only be available in test scope 2020-11-23 18:11:22 +01:00
Chris Vest
92c178ceb9 The BufTest.pooledBuffersMustResetStateBeforeReuse should run for all allocators 2020-11-23 18:10:58 +01:00
Chris Vest
eb7717b00a Move benchmarks to their own directory 2020-11-23 18:10:27 +01:00
Chris Vest
6e23ba139d
Merge pull request #2 from netty/cache-key
Include year and week number in build cache key
2020-11-23 12:00:43 +01:00
Chris Vest
5037d546e1 Include year and week number in build cache key
Also schedule a build to run at 06:30 in the morning, every Monday.
This way, the JDK and Netty 5 snapshots will be updated in the build cache every week.
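The key idea, illustrated in Java rather than the actual workflow YAML:

    import java.time.LocalDate;
    import java.time.temporal.WeekFields;

    // A cache key containing the year and ISO week number rolls over once a
    // week, forcing fresh JDK and Netty snapshots into the build cache.
    static String weeklyCacheKey() {
        LocalDate today = LocalDate.now();
        int week = today.get(WeekFields.ISO.weekOfWeekBasedYear());
        return "build-cache-" + today.getYear() + "-w" + week;
    }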
2020-11-23 10:56:11 +01:00
Chris Vest
854a2a95dd Remove cruft from CI build workflow file 2020-11-21 21:40:57 +01:00
Chris Vest
c91478341b
Merge pull request #1 from netty/gh-workflow
Add a GitHub workflow for building PRs
2020-11-21 21:35:29 +01:00
Chris Vest
b171449de9 Try yet another different caching mechanism 2020-11-21 17:00:30 +01:00
Chris Vest
87d23f52db Try a different caching mechanism 2020-11-21 15:26:10 +01:00
Chris Vest
1f9ab72a44 Add more examples 2020-11-20 22:22:01 +01:00
Chris Vest
f3e494bce3 Add first example on how to use the new buffer API 2020-11-20 16:07:52 +01:00
Chris Vest
308b4df3b6 Try fixing multi-line workflow commands 2020-11-20 16:07:52 +01:00
Chris Vest
72eb5d3bcb Try adding a build cache that uses githubs package repo as a cache
Inspired by https://dev.to/dtinth/caching-docker-builds-in-github-actions-which-approach-is-the-fastest-a-research-18ei
2020-11-20 14:38:38 +01:00
Chris Vest
023bb64a25 Update MemSegBuf with the latest panama-foreign API changes 2020-11-20 14:01:14 +01:00
Chris Vest
1706df49b8
Add a GitHub workflow for building PRs 2020-11-20 12:52:31 +01:00
Chris Vest
a4ecc1b184 Add toString methods to the buffer implementations 2020-11-20 12:44:09 +01:00
Chris Vest
53d2e4b955 Pooled buffers must reset their state before reuse
Motivation:
Buffers should always behave the same, regardless of their underlying implementation and how they are allocated.

Modification:
The SizeClassedMemoryPool did not properly reset the internal buffer state prior to reusing them.
The offsets, byte order, and contents are now cleared before a buffer is reused.
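A sketch of that reset (method names assumed from the Buf API; the default byte order is assumed to be big-endian here):

    import java.nio.ByteOrder;

    // Clear all externally observable state before the buffer is handed out again.
    static void resetForReuse(Buf buf) {
        buf.readerOffset(0);              // reset the offsets...
        buf.writerOffset(0);
        buf.order(ByteOrder.BIG_ENDIAN);  // ...restore the default byte order...
        buf.fill((byte) 0);               // ...and zero the contents
    }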

Result:
There is no way to observe externally whether a buffer was reused or not.
2020-11-20 11:53:26 +01:00
Chris Vest
b0acb61f03 Explain the make build in the README.md file 2020-11-18 17:32:42 +01:00
Chris Vest
59b564ddc8 Add a docker-based build
Motivation:
Because of the current dependency on snapshot versions of the Panama Foreign version of OpenJDK 16, this project is fairly involved to build.

Modification:
To make it easier for newcomers to build the binaries for this project, a docker-based build is added.
The docker image is constructed such that it contains a fresh snapshot build of the right fork of Java.
A make file has also been added, which encapsulates the common commands one would use for working with the docker build.

Result:
It is now easy for newcomers to build and test this project, as long as they have a working docker installation.
2020-11-18 17:16:37 +01:00
Chris Vest
a1785e8161 Move the MemorySegment based Buf implementation to its own package, and break the remaining bits of tight coupling. 2020-11-17 15:53:40 +01:00
Chris Vest
3efa93841e Rename the 'b2' package to 'api' 2020-11-17 15:40:13 +01:00
Chris Vest
0ad7f648ae Get the benchmarks running again 2020-11-17 15:34:46 +01:00
Chris Vest
b3aff17f5a Fix checkstyle so the build passes 2020-11-17 15:26:58 +01:00
Chris Vest
84e992c2c9 Move all files into the incubator repo 2020-11-17 15:26:58 +01:00
Chris Vest
07dd86dc56 Move ByteIterator to collect everything in one package 2020-11-17 15:26:58 +01:00
Chris Vest
11b0d69757 Simplify CompositeBuf.ensureWritable 2020-11-17 15:26:58 +01:00
Chris Vest
a535fb8cd8 Fix a bug in Buf.ensureWritable for pooled buffers
Motivation:
Resource lifetime was not correctly handled.

Modification:
We cannot call drop(buf) on a pooled buffer in order to release its memory from within ensureWritable, because that will send() the buffer back to the pool, which implies closing the buffer instance.
Instead, ensureWritable has to always work with untethered memory, so new APIs are added to AllocatorControl for releasing untethered memory.
The implementation already existed, because it was used by NativeMemoryCleanerDrop.

Result:
Buf.ensureWritable no longer closes pooled buffers.
2020-11-17 15:26:58 +01:00
Chris Vest
9c54aa43b4 Add Buf.ensureWritable
Motivation:
Having buffers that are able to expand to accommodate more data on demand is a great convenience.

Modification:
Composite and MemSeg buffers are now able to mutate their backing storage, to increase their capacity.
This required some tricky integration with allocators via AllocatorControl.
Basically, it's now possible to allocate memory that is NOT bound by any life time, so that it can be attached to the life time that already exists for the buffer being expanded.

Result:
Buffers can now be expanded via Buf.ensureWritable.
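A small usage sketch (API shapes assumed):

    static void growOnDemand(Allocator allocator) {
        try (Buf buf = allocator.allocate(8)) {
            buf.writeLong(1L);      // the buffer is now full
            buf.ensureWritable(8);  // expands the backing storage on demand
            buf.writeLong(2L);      // succeeds without a manual reallocation
        }
    }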
2020-11-17 15:26:58 +01:00
Chris Vest
ec9395d36e Run all Buf tests on slices as well.
Motivation:
Slices should behave identically to normal and composite buffers in all but a select few aspects related to ownership.
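For instance, a slice must read back exactly what was written through its parent (the slice signature is assumed):

    static void sliceSeesParentWrites(Allocator allocator) {
        try (Buf buf = allocator.allocate(16)) {
            buf.writeLong(1L).writeLong(2L);
            try (Buf slice = buf.slice(8, 8)) { // covers the second long
                assert slice.readLong() == 2L;
            }
        }
    }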

Modification:
Extend the test generation to also produce slice versions of nearly all test cases: both slices that cover the entire buffer, and slices that cover only a part of their parent buffer.
Also fix a handful of bugs that this uncovered.

Result:
Buffer slices are now tested much more thoroughly, and a few bugs were fixed.
2020-11-17 15:26:57 +01:00
Chris Vest
bb5aff940f Update method names and javadocs
Motivation:
There's a desire to be able to tell clearly from a method name whether or not it updates the reader or writer offsets.

Modification:
The accessor methods that take offsets as arguments (and thus do not update reader or writer offsets) have now been changed to follow a get/set naming pattern.
The methods that *do* update reader and writer offsets are still following a read/write naming pattern.
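In code, the convention looks like this (illustrative only):

    static void namingConvention(Buf buf) {
        buf.writeInt(42);        // relative: advances the writer offset
        int a = buf.readInt();   // relative: advances the reader offset
        buf.setInt(0, 7);        // absolute: offsets are untouched
        int b = buf.getInt(0);   // absolute: offsets are untouched
    }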

Result:
This makes it even clearer whether or not the relative offsets are updated.
2020-11-17 15:26:57 +01:00
Chris Vest
ca32784fe8 Remove Codegen code generator for the buffer API.
Motivation:
With the number of primitive accessor methods reduced due to only having the configured byte order, it no longer makes sense to maintain the code generator.

Modification:
Delete Codegen.

Result:
Less code to maintain.
2020-11-17 15:26:57 +01:00
Chris Vest
d306998cea Migrate new buffer API tests to JUnit 5
Motivation:
This reduces the number of test classes because we can express the same with parameterized tests in JUnit 5.
This also removes the strictly tree-shaped dependencies between the tests.

Modification:
Change the new buffer API tests to use JUnit 5 and AssertJ.
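The rough shape of such a parameterized test (fixture and factory names are hypothetical):

    import java.util.List;
    import java.util.function.Supplier;
    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.MethodSource;
    import static org.assertj.core.api.Assertions.assertThat;

    class BufContractTest {
        static List<Supplier<Allocator>> allocators() {
            return List.of(Allocator::heap, Allocator::direct);
        }

        @ParameterizedTest
        @MethodSource("allocators")
        void writeThenReadRoundTrips(Supplier<Allocator> fixture) {
            try (Allocator allocator = fixture.get();
                 Buf buf = allocator.allocate(8)) {
                buf.writeLong(0x0102030405060708L);
                assertThat(buf.readLong()).isEqualTo(0x0102030405060708L);
            }
        }
    }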

Result:
A single test for all buffer implementations.
2020-11-17 15:26:57 +01:00
Chris Vest
a63f3e609d Add Buf.iterateReverse
Motivation:
We have the ability to iterate through the bytes in a buffer with the ByteIterator, but another important use case is being able to iterate through the bytes in reverse order.

Modification:
Add methods for iterating through the bytes in a buffer in reverse order, and update the copyInto methods to make use of them. Also add some missing javadocs and argument checks.

Result:
We can also use ByteIterator for efficiently processing data within a buffer in reverse order.
2020-11-17 15:26:57 +01:00
Chris Vest
68795fb1a5 Introduce ByteIterator, and Buf.iterate
Motivation:
We need a simple API to efficiently iterate a buffer.
We've used the ByteProcessor so far, and while its internal iteration API is simple, it loses some efficiency by forcing code to only consider one byte at a time.

Modification:
The ByteIterator fills the same niche as the ByteProcessor, but uses external iteration instead of internal iteration.
This allows integrators to control the pace of iteration, and it makes it possible to expose methods for consuming bytes in bulk; one long of 8 bytes at a time.
This makes it possible to use the iterator in SIMD-Within-A-Register, or SWAR, data processing algorithms.
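A sketch of what that enables (iterator method names assumed from the description above):

    static long hashBytes(Buf buf) {
        ByteIterator itr = buf.iterate();
        long hash = 1;
        while (itr.hasNextLong()) {
            hash = 31 * hash + Long.hashCode(itr.nextLong()); // 8 bytes per step
        }
        while (itr.hasNextByte()) {
            hash = 31 * hash + itr.nextByte();                // tail bytes, one at a time
        }
        return hash;
    }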

Result:
We have a ByteIterator for efficiently processing data within a buffer.
2020-11-17 15:26:57 +01:00