[boost.lockfree] spsc_queue preallocated memory block
Hi all,

I have a feature request for Boost.lockfree (spsc_queue in particular). When using the runtime-sized version, could there possibly be a constructor that takes max_size as well as a T* that is then used as the memory buffer? This would remove the need for a specialized allocator if I want to quickly initialize the buffer with memory that does not come from the default std allocator.

Kind regards,
Philip Bennefall
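[For context, the existing runtime-sized interface always obtains its storage through the allocator template parameter (std::allocator by default). A minimal sketch of current usage, as opposed to the requested overload taking a caller-owned T*:]

    #include <boost/lockfree/spsc_queue.hpp>

    int main()
    {
        // current runtime-sized interface: the capacity is fixed at construction
        // and the ring buffer storage comes from the allocator (std::allocator here)
        boost::lockfree::spsc_queue<float> queue(1024);

        queue.push(0.5f);   // producer side
        float value;
        queue.pop(value);   // consumer side
        return 0;
    }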
> I have a feature request for Boost.lockfree (spsc_queue in particular). When using the runtime-sized version, could there possibly be a constructor that takes max_size as well as a T* that is then used as the memory buffer? This would remove the need for a specialized allocator if I want to quickly initialize the buffer with memory that does not come from the default std allocator.
this would only work for spsc_queue, as the other data structures have some memory overhead ... but even for spsc_queue, i don't like this too much, as (a) the same can be achieved using the allocator and (b) it would mess up the implementation quite a bit.

i might consider a patch, if it does not complicate the implementation too much ... but in general i guess it is cleaner to provide an allocator that wraps your pre-allocated memory.

tim
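[A rough sketch of the allocator-wrapping approach suggested above. The allocator type and buffer names are illustrative rather than part of Boost, and the buffer reserves one extra element because the runtime-sized ring buffer may request an additional slot internally for its full/empty bookkeeping:]

    #include <cstddef>
    #include <new>
    #include <boost/lockfree/spsc_queue.hpp>
    #include <boost/lockfree/policies.hpp>

    // minimal allocator that hands out a caller-supplied block instead of
    // heap memory; illustrative only, error handling reduced to bad_alloc
    template <class T>
    struct preallocated_allocator
    {
        typedef T              value_type;
        typedef T*             pointer;
        typedef const T*       const_pointer;
        typedef T&             reference;
        typedef const T&       const_reference;
        typedef std::size_t    size_type;
        typedef std::ptrdiff_t difference_type;

        template <class U>
        struct rebind { typedef preallocated_allocator<U> other; };

        preallocated_allocator(void* buffer, std::size_t bytes)
            : buffer_(buffer), bytes_(bytes) {}

        template <class U>
        preallocated_allocator(const preallocated_allocator<U>& other)
            : buffer_(other.buffer_), bytes_(other.bytes_) {}

        T* allocate(std::size_t n)
        {
            if (n * sizeof(T) > bytes_)
                throw std::bad_alloc();     // pre-allocated block too small
            return static_cast<T*>(buffer_);
        }

        void deallocate(T*, std::size_t) {} // memory stays owned by the caller

        void*       buffer_;
        std::size_t bytes_;
    };

    int main()
    {
        static float storage[1024 + 1];     // +1 slot for the ring buffer's bookkeeping

        typedef preallocated_allocator<float> alloc_t;
        boost::lockfree::spsc_queue<float, boost::lockfree::allocator<alloc_t> >
            queue(1024, alloc_t(storage, sizeof(storage)));

        queue.push(0.5f);
        float value;
        queue.pop(value);
        return 0;
    }

[With this, the queue's storage lives in the caller-provided block while the existing (size, allocator) constructor is used unchanged.]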
On 10/25/2014 10:15 PM, Tim Blechmann wrote:
>> I have a feature request for Boost.lockfree (spsc_queue in particular). When using the runtime-sized version, could there possibly be a constructor that takes max_size as well as a T* that is then used as the memory buffer? This would remove the need for a specialized allocator if I want to quickly initialize the buffer with memory that does not come from the default std allocator.
>
> this would only work for spsc_queue, as the other data structures have some memory overhead ... but even for spsc_queue, i don't like this too much, as (a) the same can be achieved using the allocator and (b) it would mess up the implementation quite a bit.
>
> i might consider a patch, if it does not complicate the implementation too much ... but in general i guess it is cleaner to provide an allocator that wraps your pre-allocated memory.
>
> tim
Thanks for the quick response, Tim. I understand where you're coming from. I will hack around a little, and if I come up with anything reasonable I will submit a patch. I would also like to take the opportunity to thank you for this great implementation. I am in the process of integrating it into an audio mixer chain as the final destination for the data, and it has been a pleasure to work with.

Kind regards,
Philip Bennefall
participants (2)
- Philip Bennefall
- Tim Blechmann