2014/1/14 Kenneth Adam Miller
By the way, I'm working on a master's thesis, so I frequently skip sleep. Sometimes, after a lack of sleep, getting across precisely what is needed and understood can take an iteration or two :)
Good luck with your thesis ;-) Btw, I think top-posting is inappropriate on this list.

On Tue, Jan 14, 2014 at 10:15 AM, Kenneth Adam Miller <kennethadammiller@gmail.com> wrote:
Pretty much on performance concerns. I know there's at least one copy performed during compression, from uncompressed to compressed. Here's how I do it:
filtering_ostream *fos = new filtering_ostream();
fos->push(bzip2_compressor());
string *x = acquireStringFromPool(); // Blocking pointer return that reaches into a list of string *,
                                     // each allocated with new string(30000, 0) (it's multithreaded, ok lol :) )
fos->push(boost::iostreams::back_insert_device<string>(*x)); // This is what I was searching for all along.
                                                             // (Note: back_insert_device takes the container
                                                             // by reference, so it must be *x, not x.)
Doesn't this append to your 30k string? If a preallocated chunk of memory is ok for you, check out vector::reserve() ;-)

then later, when I want to write to fos I do
*fos << *doc; //I go straight from container to compression.
Maybe my specification that "I don't want to find that it's copying at all" was a bit weird, because obviously it has to move the data, right? I'm just saying that most of the examples I saw looked something like this:
compress(string x) {
    stringstream ss(x); // unnecessary initialization in my case; couldn't find an example without it
    // ...something similar to what I did
}
OK, so did you measure the performance of your solution compared to the example above? And then against a version using std::vector with reserved memory? My suggestions have nothing to do with compression or streams, but you might be optimizing prematurely here.

HTH,
Kris
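A rough sketch of the measurement being asked for here, under assumptions of my own (the function names, the 30k size, and leaving the compression filter out so the copy cost is isolated): time the stringstream-based path against direct appends into a string with reserved capacity.

```cpp
#include <chrono>
#include <sstream>
#include <string>

// Path 1: the "typical example" -- data goes through a stringstream first.
std::string via_stringstream(const std::string& doc) {
    std::stringstream ss(doc);   // extra copy of doc into the stream buffer
    return ss.str();             // and another copy coming back out
}

// Path 2: append straight into a string with reserved capacity.
std::string via_reserve(const std::string& doc) {
    std::string out;
    out.reserve(doc.size());     // preallocate, so append() never reallocates
    out.append(doc);
    return out;
}

// Crude timing helper; for a real comparison, run many iterations
// and look at the distribution, not a single call.
template <class F>
long long time_ns(F f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
}
```

Both paths produce identical output; only the number of copies differs, which is exactly the thing a measurement would settle.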