Commit Graph

280 Commits

Author SHA1 Message Date
Chris Vest
d4a54d3828 Fix typo 2021-05-28 13:55:58 +02:00
Chris Vest
0aa2853cf3 Fix a bug in MemSegBuffer 2021-05-28 13:55:55 +02:00
Chris Vest
a6b81c89ef Fix test failures in ByteToMessageDecoderTest 2021-05-28 12:23:16 +02:00
Chris Vest
b2bf0029be Fix adaptor tests 2021-05-28 10:58:37 +02:00
Chris Vest
050db15e07 Fix bug where copy with over-sized length threw a wrong exception 2021-05-27 17:41:40 +02:00
Chris Vest
b1b2c983f8 Do less aggressive test sampling when running tests locally
Instead of filtering out 95% of test samples, we now only filter out 85%.
2021-05-27 17:38:38 +02:00
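
A minimal sketch of what such rate-based sample filtering could look like; the class, constant, and helper below are hypothetical, not the repository's actual test infrastructure:

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch: keep 15% of generated test samples when running
// locally (i.e. filter out 85%), where previously only 5% were kept.
final class LocalTestSampling {
    static final double LOCAL_KEEP_RATE = 0.15; // was 0.05

    static boolean keepSample() {
        return ThreadLocalRandom.current().nextDouble() < LOCAL_KEEP_RATE;
    }
}
```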
Chris Vest
05d76c27c1 Hide isOwned, countBorrows, and acquire from the public API, even on CompositeBuffer 2021-05-27 17:34:40 +02:00
Chris Vest
1c25fa88b7 Fix test failures coming from removal of slice and introduction of copy. 2021-05-27 17:06:30 +02:00
Chris Vest
707e5e2afb Remove the slice methods, add copy methods 2021-05-27 14:07:31 +02:00
Chris Vest
bfa8fd0b1f Make tests pass after removing acquire from the public API 2021-05-27 11:39:57 +02:00
Chris Vest
b8cfd0768e Fixes for compilation and running tests with UnsafeBuffer implementation 2021-05-26 18:31:28 +02:00
Chris Vest
f0ee2e1467 Remove acquire from the public API
This is a step toward effectively eliminating reference counting.
Reference counting is only needed when the memory in buffers can be shared.
If we remove all forms of sharing, then the buffers will be in an owned state at all times.
We would then no longer need to worry about the state of a buffer before calling methods such as `ensureWritable`.

Just removing `acquire` is not enough; we also need to remove the `slice` method.
In this commit we are, however, starting with `acquire` because doing so requires rearranging the type hierarchy and the generics we have in play.
This was not an easy exercise, but for the purpose of record keeping, it's useful to have that work separate from the work of removing `slice`.
2021-05-26 17:19:26 +02:00
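
To make the intent of the commit above concrete, here is a minimal sketch of the ownership-only model it describes; the interfaces and method names are illustrative, not the repository's actual type hierarchy:

```java
// Illustrative sketch: with acquire() gone, a buffer has exactly one
// owner at all times, so the reference count is implicitly 1 and no
// state checks are needed before calls like ensureWritable.
interface Send<T> {
    T receive(); // the receiving thread becomes the sole owner
}

interface OwnedBuffer extends AutoCloseable {
    void ensureWritable(int size); // always safe: caller is the sole owner
    Send<OwnedBuffer> send();      // ownership transfer replaces sharing
    @Override
    void close();                  // the sole owner frees the memory
}
```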
Chris Vest
aaf8e294cc
Merge pull request #68 from netty/modules
Split the repo into multiple modules and make building with Java 11 possible
2021-05-22 09:32:46 +02:00
Chris Vest
b710546dd5 Short readme update on Java 11 support 2021-05-21 22:09:37 +02:00
Chris Vest
0267afc0cd Remove redundant step 2021-05-21 22:03:03 +02:00
Chris Vest
408350622d Debug issue with the maven cache 2021-05-21 21:53:57 +02:00
Chris Vest
acf9f8b4fb Publish test reports for the Java 11 build 2021-05-21 19:17:42 +02:00
Chris Vest
a1f943c8ae Cache the Maven repository for the Java 11 build 2021-05-21 19:15:31 +02:00
Chris Vest
a9b8189aa1 Add a Java 11 build 2021-05-21 18:38:34 +02:00
Chris Vest
9dc1d533e3 Fix remaining tests and make the build work on Java 11 2021-05-21 17:28:07 +02:00
Chris Vest
1c3b27f9e0 Further reduce memory overhead of PooledBufferAllocator
… by lazily allocating PoolSubpages inside the PoolArenas.
By default, a pooled allocator creates 32 arenas, and each arena, by default, makes room for 39 PoolSubpage size classes. All told, that is 1,248 objects that serve no purpose other than as headers and locks for linked lists.
Each of these objects is at least 81 bytes, plus whatever the JVM adds on top.
This might not sound like much, but in our testing we will be creating many thousands of allocators, and then it really adds up.
2021-05-21 15:37:59 +02:00
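
For scale: 32 arenas times 39 size classes is 32 * 39 = 1,248 head objects when allocated eagerly. A rough sketch of the lazy-initialization pattern the commit describes, using simplified stand-ins for the PoolArena internals:

```java
// Simplified stand-in for a PoolArena's subpage heads: creating each
// head lazily, on first use of its size class, avoids paying for all
// 1,248 header/lock objects up front. Names here are illustrative.
final class ArenaSketch {
    private static final int SIZE_CLASSES = 39; // assumed default
    private final Object[] subpageHeads = new Object[SIZE_CLASSES];

    synchronized Object subpageHead(int sizeClassIdx) {
        Object head = subpageHeads[sizeClassIdx];
        if (head == null) {
            // Real code would create a PoolSubpage list head here.
            head = new Object();
            subpageHeads[sizeClassIdx] = head;
        }
        return head;
    }
}
```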
Chris Vest
e6f867cc5f Remove some deprecated methods 2021-05-21 15:03:48 +02:00
Chris Vest
7e379fd6ad Make sure to copy test reports out of the finished build container 2021-05-21 14:34:33 +02:00
Chris Vest
1143223407 First draft of splitting the repo into multiple modules and allowing builds with Java 11 2021-05-21 14:04:23 +02:00
Chris Vest
99cddf7749
Merge pull request #67 from netty/pooling-allocator
Port over the pooling buffer allocator from Netty
2021-05-18 22:38:32 +02:00
Chris Vest
0105e5231d Remove the SizeClassedMemoryPool implementation
And fix the remaining test failures for the PooledBufferAllocator.
The PooledBufferAllocator now also keeps its chunks alive, even after closing the pool, as long as there are allocated buffers that refer to the memory.
The pool now clears all of its relevant internal references when closed, allowing the GC to reclaim all of the pooled memory, assuming no allocated buffers remain.
2021-05-18 18:20:32 +02:00
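
A minimal sketch, with hypothetical names, of the close semantics described above: close() clears the pool's own references, but each outstanding buffer still strongly references its chunk, so the GC reclaims the pooled memory only once no allocated buffers remain.

```java
// Hypothetical sketch of the reachability scheme, not the real code.
final class PoolSketch implements AutoCloseable {
    static final class Chunk { /* backing memory shared by many buffers */ }

    static final class PooledBuffer {
        private final Chunk chunk; // keeps the chunk alive while allocated
        PooledBuffer(Chunk chunk) { this.chunk = chunk; }
    }

    private volatile Chunk[] chunks = new Chunk[8];

    @Override
    public void close() {
        chunks = null; // pool-side references cleared; buffers keep theirs
    }
}
```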
Chris Vest
dec3756e6d Buffers from the pooling allocator must be able to return memory to the pool if the buffer objects are leaked. 2021-05-17 18:18:31 +02:00
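A hedged sketch of one way to meet that requirement: register each buffer with a `java.lang.ref.Cleaner` so that, if the buffer becomes unreachable without `close()` being called, the registered action still returns its memory to the pool. Whether the repository uses Cleaner or some other mechanism is an assumption here.

```java
import java.lang.ref.Cleaner;

// Sketch only: leaked buffers get their memory reclaimed by a Cleaner.
final class LeakSafePool {
    private static final Cleaner CLEANER = Cleaner.create();

    static final class Buffer implements AutoCloseable {
        private final Cleaner.Cleanable cleanable;

        Buffer(Runnable returnMemoryToPool) {
            // Note: the action must not capture the Buffer instance,
            // or the buffer would never become unreachable.
            this.cleanable = CLEANER.register(this, returnMemoryToPool);
        }

        @Override
        public void close() {
            cleanable.clean(); // normal path: return memory eagerly
        }
    }
}
```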
Chris Vest
03743fca0d Pooling allocator cleanups and checkstyle fixes 2021-05-17 16:47:43 +02:00
Chris Vest
12b38234e5 Make sure that every allocation gets its own unique Drop instance.
This allows the pooling allocator to precisely control how each allocation should be dropped.
This is important to the pooling allocator, because it needs to know what arena, chunk, page, run, etc. is being freed, exactly.
2021-05-17 15:15:19 +02:00
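
An illustrative sketch of the idea above: a per-allocation Drop that captures exactly where its memory came from, so freeing routes back to the right arena, chunk, and run. The fields and free logic are assumptions for illustration.

```java
// Hypothetical per-allocation Drop carrying de-allocation context.
interface Drop<T> {
    void drop(T obj);
}

final class PooledDrop implements Drop<Object> {
    private final Object arena;  // hypothetical: owning arena
    private final Object chunk;  // hypothetical: owning chunk
    private final long handle;   // hypothetical: encodes page/run

    PooledDrop(Object arena, Object chunk, long handle) {
        this.arena = arena;
        this.chunk = chunk;
        this.handle = handle;
    }

    @Override
    public void drop(Object buffer) {
        // Real code would free (chunk, handle) back into 'arena' here.
    }
}
```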
Chris Vest
fa75c81c6c Fix checkstyle issues 2021-05-12 16:48:24 +02:00
Chris Vest
670cca2d43 Fix more tests
Fundamental design issues remain, though.
Drops can end up being shared across instances with different memory allocations, and this means we can't currently attach the de-allocation information to the drop instance.
We also cannot use the AllocationControl instance for this because it has the same problem.
2021-05-12 16:05:09 +02:00
Chris Vest
6b62b3a6c7 The pooling buffer allocator must allocate buffers with native byte order by default 2021-05-12 10:52:51 +02:00
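A small sketch of the default described above: the platform's native byte order (for example `LITTLE_ENDIAN` on x86_64) is what newly allocated pooled buffers should report unless configured otherwise. The class below is illustrative, not the allocator's code.

```java
import java.nio.ByteOrder;

final class NativeOrderCheck {
    public static void main(String[] args) {
        // Prints the order pooled buffers would default to on this host.
        System.out.println(ByteOrder.nativeOrder());
    }
}
```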
Chris Vest
481b2ddd3d Switch to the new pooling allocator by default for heap buffers 2021-05-12 10:52:22 +02:00
Chris Vest
1daa0685dc Add license headers 2021-05-12 10:49:09 +02:00
Chris Vest
0c49c887e6 Renames to align with the new API 2021-05-12 10:46:58 +02:00
Chris Vest
b4b0afd787 Second, more complete draft of porting over the pooling allocator from Netty 2021-05-12 10:44:33 +02:00
Chris Vest
ae2abdd2aa First incomplete draft of porting over the pooling allocator 2021-05-11 14:57:42 +02:00
Chris Vest
e6a238b14d Add features to MemoryManager
The ability to allocate a buffer on a sub-region of some recoverable memory will be useful when porting over the arena-based pooling allocator from Netty.
2021-05-11 14:57:42 +02:00
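
A hypothetical sketch of the capability described above: allocate one large piece of recoverable memory, then carve buffers out of sub-regions of it, which is what an arena-based pool needs. All names and signatures here are illustrative, not the actual MemoryManager API.

```java
// Sketch: one big allocation, many buffers over sub-regions of it.
interface Memory {
    long size();
}

interface Buffer extends AutoCloseable {
}

interface MemoryManagerSketch {
    Memory allocateRecoverable(long size);

    // Wrap [offset, offset + length) of 'memory' as a buffer.
    Buffer allocateBufferOn(Memory memory, long offset, long length);
}
```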
Chris Vest
fc7ba4522f
Merge pull request #66 from netty/panama-fixups
Remove hacks related to the now lifted ByteBuffer/MemorySegment restrictions
2021-05-11 13:13:29 +02:00
Chris Vest
7b384c3bf2 Remove hacks related to the now lifted ByteBuffer/MemorySegment restrictions 2021-05-11 11:35:38 +02:00
Chris Vest
35b1d4a4fe
Merge pull request #62 from netty/hide-refcounts
Hide Rc.countBorrows
2021-05-10 10:25:13 +02:00
Chris Vest
ccaed0ae7b
Merge pull request #61 from netty/composite-split
Add splitComponentsFloor and splitComponentsCeil
2021-05-07 17:27:15 +02:00
Chris Vest
f19f04291e
Merge pull request #65 from netty/send-composite
Fix composite buffer send bug
2021-05-07 12:40:20 +02:00
Chris Vest
9db454ffe5 Fix composite buffer send bug
Fix a bug in CompositeBuffer.send, where the received buffer would not have ownership.
The fix is to avoid incrementing the reference count in the composite buffer constructor call used in the transferOwnership function.
2021-05-07 12:02:55 +02:00
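
A sketch of the fix pattern, with hypothetical names: when send() reconstructs the composite on the receiving side, each constituent buffer already carries exactly one reference, so the constructor used for the transfer must adopt the parts as-is. Incrementing the count there is the bug: it leaves the received buffer shared instead of owned.

```java
final class CompositeSketch {
    static final class Part {
        void acquire() { /* increment this part's reference count */ }
    }

    private final Part[] parts;

    // Normal composition: callers keep their handles, so acquire each part.
    CompositeSketch(Part[] parts) {
        this(parts, true);
    }

    private CompositeSketch(Part[] parts, boolean acquireParts) {
        this.parts = parts;
        if (acquireParts) {
            for (Part p : parts) {
                p.acquire();
            }
        }
    }

    static CompositeSketch transferOwnership(Part[] parts) {
        return new CompositeSketch(parts, false); // the fix: no extra acquire
    }
}
```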
Chris Vest
1eece080af Fix up tests that relied on the borrow count 2021-05-07 11:31:46 +02:00
Chris Vest
ef714c90d9 Hide Rc.countBorrows
The state that people really care about is whether or not an Rc has ownership.
Exposing the reference count will probably just confuse people.
The reference count is still exposed on RcSupport because it may be (and is, in the case of ByteBufAdaptor) needed to support implementation details.
2021-05-07 11:25:42 +02:00
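
A sketch of the resulting API shape, with hypothetical names: callers ask the ownership question directly, while the borrow count stays visible only to implementations (as the commit notes, ByteBufAdaptor still needs it internally).

```java
interface Rc {
    boolean isOwned(); // public: the question callers actually have
}

abstract class RcSupport implements Rc {
    private int borrows;

    @Override
    public final boolean isOwned() {
        return borrows == 0; // owned: no outstanding borrows
    }

    // Internal only: exposed to subclasses for implementation details.
    protected final int countBorrows() {
        return borrows;
    }
}
```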
Chris Vest
556d0acc89 Add splitComponentsFloor and splitComponentsCeil
These methods make it possible to accurately split composite buffers at component boundaries, by rounding the offset down or up, respectively, to the nearest component boundary.

Composite buffers already support the split method, but it is hard for client code to predict precisely where component boundaries are placed inside composite buffers.
When split is used with an offset that does not land exactly on a component boundary, the internal component that the offset lands on will also be split.
This may make it harder to precisely reason about memory life cycles and reuse.
2021-05-07 10:41:46 +02:00
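
To make the rounding concrete, here is an illustrative helper (not the library's code) that computes which boundary each method would split at, given the cumulative end offsets of a composite's components:

```java
// The arithmetic the two methods imply, for a composite of two 8-byte
// components (only interior boundary at offset 8) and an offset of 5.
final class SplitBoundaries {
    // boundaries: cumulative end offsets of components, e.g. {8, 16}.
    static long floorBoundary(long[] boundaries, long offset) {
        long result = 0;
        for (long b : boundaries) {
            if (b <= offset) result = b; else break;
        }
        return result;
    }

    static long ceilBoundary(long[] boundaries, long offset) {
        for (long b : boundaries) {
            if (b >= offset) return b;
        }
        return boundaries[boundaries.length - 1];
    }

    public static void main(String[] args) {
        long[] boundaries = {8, 16}; // two 8-byte components
        System.out.println(floorBoundary(boundaries, 5)); // 0
        System.out.println(ceilBoundary(boundaries, 5));  // 8
    }
}
```

Either way, no component is cut in half, unlike a plain split at offset 5, which would split the first component internally.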
Chris Vest
83643a5dc9
Merge pull request #60 from netty/build-fixes
Make the build use less space on the CI host
2021-05-07 08:40:35 +02:00
Chris Vest
24b78e4a6b Invalidate existing cache keys 2021-05-06 23:26:16 +02:00
Chris Vest
0cd09f5f8b Make the build use less space on the CI host
High space usage could cause the Docker layer cache to fail while packaging the layers.
2021-05-06 17:31:01 +02:00