Commit Graph

267 Commits

Author SHA1 Message Date
Chris Vest b710546dd5 Short readme update on Java 11 support 2021-05-21 22:09:37 +02:00
Chris Vest 0267afc0cd Remove redundant step 2021-05-21 22:03:03 +02:00
Chris Vest 408350622d Debug issue with the maven cache 2021-05-21 21:53:57 +02:00
Chris Vest acf9f8b4fb Publish test reports for the Java 11 build 2021-05-21 19:17:42 +02:00
Chris Vest a1f943c8ae Cache the Maven repository for the Java 11 build 2021-05-21 19:15:31 +02:00
Chris Vest a9b8189aa1 Add a Java 11 build 2021-05-21 18:38:34 +02:00
Chris Vest 9dc1d533e3 Fix remaining tests and make the build work on Java 11 2021-05-21 17:28:07 +02:00
Chris Vest 1c3b27f9e0 Further reduce memory overhead of PooledBufferAllocator
… by lazily allocating PoolSubpages inside the PoolArenas.
By default, a pooled allocator creates 32 arenas, and each arena makes room for 39 PoolSubpage size classes: all told, 1,248 objects that serve no purpose other than acting as headers and locks for linked lists.
Each of these objects is at least 81 bytes, plus whatever the JVM adds on top.
This might not sound like much, but in our testing we'll be creating many thousands of allocators, and then it really adds up.
2021-05-21 15:37:59 +02:00
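A minimal sketch of the lazy-initialisation idea behind this commit, assuming illustrative names (Arena, Subpage, headFor); the actual PoolArena and PoolSubpage classes differ in detail:

```java
// Lazy creation of PoolSubpage list heads; names are illustrative.
final class Arena {
    private static final int SIZE_CLASSES = 39;

    // One slot per size class. A head is created only when its size class is
    // first used, instead of eagerly building 39 list heads per arena
    // (32 arenas x 39 size classes = 1,248 objects of 81+ bytes each).
    private final Subpage[] subpageHeads = new Subpage[SIZE_CLASSES];

    synchronized Subpage headFor(int sizeClassIndex) {
        Subpage head = subpageHeads[sizeClassIndex];
        if (head == null) {
            head = new Subpage();   // serves only as header and lock for the list
            head.prev = head;
            head.next = head;
            subpageHeads[sizeClassIndex] = head;
        }
        return head;
    }

    static final class Subpage {
        Subpage prev, next;         // doubly-linked list of pooled subpages
    }
}
```

An allocator that never touches a given size class then never pays for that header object.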
Chris Vest e6f867cc5f Remove some deprecated methods 2021-05-21 15:03:48 +02:00
Chris Vest 7e379fd6ad Make sure to copy test reports out of the finished build container 2021-05-21 14:34:33 +02:00
Chris Vest 1143223407 First draft of splitting the repo into multiple modules and allowing builds with Java 11 2021-05-21 14:04:23 +02:00
Chris Vest 99cddf7749
Merge pull request #67 from netty/pooling-allocator
Port over the pooling buffer allocator from Netty
2021-05-18 22:38:32 +02:00
Chris Vest 0105e5231d Remove the SizeClassedMemoryPool implementation
And fix the remaining test failures for the PooledBufferAllocator.
The PooledBufferAllocator now also keeps its chunks alive, even after closing the pool, as long as there are allocated buffers that refer to the memory.
The pool now clears all of its relevant internal references when closed, allowing the GC to reclaim all of the pooled memory, assuming no allocated buffers remain.
2021-05-18 18:20:32 +02:00
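A rough sketch of the close semantics described above, under assumed names (Pool, Chunk, returnMemory) rather than the actual PooledBufferAllocator internals:

```java
import java.util.ArrayList;
import java.util.List;

// Close semantics sketch; names are assumptions, not the real internals.
final class Pool {
    private List<Chunk> chunks = new ArrayList<>();
    private boolean closed;

    synchronized void close() {
        closed = true;
        chunks = null;              // clear internal references so the GC can reclaim unused chunks
    }

    // Called from a buffer's Drop when an outstanding allocation is finally closed.
    synchronized void returnMemory(Chunk chunk) {
        if (closed) {
            chunk.deallocate();     // pool is gone: the chunk dies with its last buffer
        } else {
            chunk.recycle();        // pool is still open: keep the chunk for reuse
        }
    }

    static final class Chunk {
        void recycle()    { /* mark the memory as free for future allocations */ }
        void deallocate() { /* release the backing memory to the system */ }
    }
}
```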
Chris Vest dec3756e6d Buffers from the pooling allocator must be able to return memory to the pool if the buffer objects are leaked. 2021-05-17 18:18:31 +02:00
Chris Vest 03743fca0d Pooling allocator cleanups and checkstyle fixes 2021-05-17 16:47:43 +02:00
Chris Vest 12b38234e5 Make sure that every allocation gets its own unique Drop instance.
This allows the pooling allocator to precisely control how each allocation should be dropped.
This is important to the pooling allocator because it needs to know exactly which arena, chunk, page, run, etc. is being freed.
2021-05-17 15:15:19 +02:00
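A hypothetical sketch of what a per-allocation Drop carrying that information could look like; Arena, Chunk, and handle are illustrative names, not the project's actual internals:

```java
// Per-allocation Drop sketch; Arena, Chunk, and handle are illustrative names.
interface Drop<T> {
    void drop(T obj);
}

final class Arena {
    void free(Chunk chunk, long handle) { /* return the run to the pool */ }
}

final class Chunk { }

final class PooledDrop implements Drop<Object> {
    private final Arena arena;
    private final Chunk chunk;
    private final long handle;      // encodes which page/run within the chunk

    PooledDrop(Arena arena, Chunk chunk, long handle) {
        this.arena = arena;
        this.chunk = chunk;
        this.handle = handle;
    }

    @Override
    public void drop(Object buffer) {
        // Every allocation has its own Drop instance, so the pool knows
        // precisely which arena, chunk, and run is being freed.
        arena.free(chunk, handle);
    }
}
```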
Chris Vest fa75c81c6c Fix checkstyle issues 2021-05-12 16:48:24 +02:00
Chris Vest 670cca2d43 Fix more tests
Fundamental design issues remain, though.
Drops can end up being shared across instances with different memory allocations, which means we cannot currently attach the de-allocation information to the Drop instance.
We also cannot use the AllocationControl instance for this because it has the same problem.
2021-05-12 16:05:09 +02:00
Chris Vest 6b62b3a6c7 The pooling buffer allocator must allocate buffers with native byte order by default 2021-05-12 10:52:51 +02:00
Chris Vest 481b2ddd3d Switch to the new pooling allocator by default for heap buffers 2021-05-12 10:52:22 +02:00
Chris Vest 1daa0685dc Add license headers 2021-05-12 10:49:09 +02:00
Chris Vest 0c49c887e6 Renames to align with the new API 2021-05-12 10:46:58 +02:00
Chris Vest b4b0afd787 Second, more complete draft of porting over the pooling allocator from Netty 2021-05-12 10:44:33 +02:00
Chris Vest ae2abdd2aa First incomplete draft of porting over the pooling allocator 2021-05-11 14:57:42 +02:00
Chris Vest e6a238b14d Add features to MemoryManager
The ability to allocate a buffer on a sub-region of some recoverable memory will be useful when porting over the arena-based pooling allocator from Netty.
2021-05-11 14:57:42 +02:00
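A hypothetical sketch of the kind of capability described above; the method names and signatures here are assumptions, not the actual MemoryManager API:

```java
// Sub-region allocation sketch; method names and signatures are assumptions.
interface Buf { }

interface MemoryManager {
    // Allocate a new chunk of recoverable memory.
    Object allocateRecoverableMemory(long size);

    // Build a buffer over the sub-region [offset, offset + length) of existing
    // recoverable memory. An arena-based pooling allocator can then carve many
    // small buffers out of one large chunk instead of allocating each separately.
    Buf allocateBufferOnSubRegion(Object recoverableMemory, long offset, long length);
}
```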
Chris Vest fc7ba4522f
Merge pull request #66 from netty/panama-fixups
Remove hacks related to the now lifted ByteBuffer/MemorySegment restrictions
2021-05-11 13:13:29 +02:00
Chris Vest 7b384c3bf2 Remove hacks related to the now lifted ByteBuffer/MemorySegment restrictions 2021-05-11 11:35:38 +02:00
Chris Vest 35b1d4a4fe
Merge pull request #62 from netty/hide-refcounts
Hide Rc.countBorrows
2021-05-10 10:25:13 +02:00
Chris Vest ccaed0ae7b
Merge pull request #61 from netty/composite-split
Add splitComponentsFloor and splitComponentsCeil
2021-05-07 17:27:15 +02:00
Chris Vest f19f04291e
Merge pull request #65 from netty/send-composite
Fix composite buffer send bug
2021-05-07 12:40:20 +02:00
Chris Vest 9db454ffe5 Fix composite buffer send bug
Fix a bug in CompositeBuffer.send, where the received buffer would not have ownership.
The fix is to avoid incrementing the reference count in the composite buffer constructor call used in the transferOwnership function.
2021-05-07 12:02:55 +02:00
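A minimal sketch of the reference-count reasoning behind the fix, using illustrative names rather than the actual CompositeBuffer internals:

```java
// Reference-count reasoning sketch; names are illustrative, not the real internals.
final class Rc {
    private int references = 1;     // a freshly constructed Rc is owned

    boolean isOwned() { return references == 1; }
    void acquire()    { references++; }

    // Receiving a sent buffer should wrap the single existing reference as-is.
    // Also incrementing the count here (the bug) left the received buffer with
    // references == 2, so it never had ownership.
    static Rc transferOwnership() {
        Rc received = new Rc();
        // received.acquire();      // <- the extra increment the fix removes
        return received;
    }
}
```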
Chris Vest 1eece080af Fix up tests that relied on the borrow count 2021-05-07 11:31:46 +02:00
Chris Vest ef714c90d9 Hide Rc.countBorrows
The state that people really care about is whether or not an Rc has ownership.
Exposing the reference count will probably just confuse people.
The reference count is still exposed on RcSupport because it may be (and is, in the case of ByteBufAdaptor) needed to support implementation details.
2021-05-07 11:25:42 +02:00
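A hypothetical sketch of the visibility split described above; the names follow the commit message, but the field and method signatures are assumptions:

```java
// Visibility split sketch; fields and signatures are assumptions.
abstract class RcSupport<T extends RcSupport<T>> {
    private int acquires;           // 0 means exactly one reference exists

    // Public API: the only ownership state most callers should care about.
    public boolean isOwned() {
        return acquires == 0;
    }

    // Implementation detail: no longer part of the public Rc surface, but kept
    // reachable here so adaptors such as ByteBufAdaptor can still read it.
    protected int countBorrows() {
        return acquires;
    }
}
```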
Chris Vest 556d0acc89 Add splitComponentsFloor and splitComponentsCeil
These methods make it possible to split composite buffers accurately at component boundaries, by rounding the offset down or up, respectively, to the nearest component boundary.

Composite buffers already support the split method, but it is hard for client code to predict precisely where component boundaries are placed inside composite buffers.
When split is used with an offset that does not land exactly on a component boundary, the internal component that the offset lands on will also be split.
This may make it harder to precisely reason about memory life cycles and reuse.
2021-05-07 10:41:46 +02:00
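An illustrative sketch of the difference; splitComponentsFloor and splitComponentsCeil are the methods this commit adds, while the type names and the long offset parameter are assumptions:

```java
// Component-boundary splitting sketch; the new methods come from this commit,
// the types and offset parameter type are assumptions.
final class SplitExample {
    interface Buffer { }

    interface CompositeBuffer extends Buffer {
        Buffer split(long offset);
        Buffer splitComponentsFloor(long offset);
        Buffer splitComponentsCeil(long offset);
    }

    // Suppose `composite` holds three 8-byte components, giving component
    // boundaries at offsets 0, 8, 16 and 24.
    static Buffer demonstrate(CompositeBuffer composite) {
        Buffer front = composite.splitComponentsFloor(13);
        // `front` covers [0, 8): offset 13 is rounded DOWN to the boundary at 8,
        // so no individual component is cut in half.
        // splitComponentsCeil(13) would instead round UP and split at 16, and a
        // plain split(13) would split the middle component itself at offset 13.
        return front;
    }
}
```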
Chris Vest 83643a5dc9
Merge pull request #60 from netty/build-fixes
Make the build use less space on the CI host
2021-05-07 08:40:35 +02:00
Chris Vest 24b78e4a6b Invalidate existing cache keys 2021-05-06 23:26:16 +02:00
Chris Vest 0cd09f5f8b Make the build use less space on the CI host
High space usage could cause the docker layer cache to fail while packaging the layers.
2021-05-06 17:31:01 +02:00
Chris Vest 3b8aabbd10
Merge pull request #57 from netty/docker-image-reduction
Make the docker image layers take up less space
2021-05-06 10:49:51 +02:00
Chris Vest 14a0f56660 Make the docker image layers take up less space
This should make the docker layer cache in our CI build more effective, and faster.
2021-05-05 22:01:04 +02:00
Chris Vest e4ea1d7806
Merge pull request #30 from netty/readme
Writeup of rationale behind the buffer API design
2021-05-05 18:53:34 +02:00
Chris Vest 86cc19bd76
Merge pull request #55 from netty/alloc-close
Clarify what it means to close an allocator
2021-05-05 16:39:17 +02:00
Chris Vest 5a0bf8de97 Update RATIONALE.adoc with CompositeBuffer updates and bifurcate/split rename 2021-05-05 16:20:11 +02:00
Chris Vest 2ac10d8e09 Update README after rebase 2021-05-05 16:09:53 +02:00
Chris Vest 385fb1ac27 Update section on composite buffers 2021-05-05 16:09:53 +02:00
Chris Vest 86f2326e0c Writeup of rationale behind the buffer API design 2021-05-05 16:09:53 +02:00
Chris Vest 7b48263184 Re-enable the cleaner tests and make them run faster 2021-05-05 16:09:11 +02:00
Chris Vest 2ab8dd65eb Remove unused imports 2021-05-05 16:09:11 +02:00
Chris Vest 44c476c461 Clarify what it means to close a BufferAllocator 2021-05-05 16:09:11 +02:00
Chris Vest 928f0bbb14
Merge pull request #54 from netty/const-bufs
Buffers as constants
2021-05-05 16:08:38 +02:00
Chris Vest 599c01b762 Make the buffer read-only state irreversible
This greatly simplifies the semantics around the const buffers.
When they can no longer be made writable, there is no longer any need for "deconstification".

I decided to call the method "makeReadOnly" to distinguish it from the "asReadOnly" seen in ByteBuf and ByteBuffer. The latter two return read-only _views_ of the buffer, while makeReadOnly changes the state of the buffer in place.
2021-05-05 12:30:52 +02:00
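An illustrative sketch of the in-place semantics; makeReadOnly is the method discussed above, while the Buffer type and readOnly() query used here are assumptions:

```java
// In-place read-only sketch; the Buffer type and readOnly() query are assumptions.
final class ReadOnlyExample {
    interface Buffer {
        Buffer makeReadOnly();
        boolean readOnly();
    }

    static void demonstrate(Buffer buf) {
        buf.makeReadOnly();         // changes the state of THIS buffer in place...
        assert buf.readOnly();      // ...rather than returning a separate read-only view
        // Unlike ByteBuf.asReadOnly() or ByteBuffer.asReadOnlyBuffer(), there is no
        // second view object, and the read-only state cannot be reversed.
    }
}
```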