On 17-Feb-16 8:02 PM, Robert Ramey wrote:
On 2/17/16 1:25 AM, Vladimir Prus wrote:
- It seems you have some issues on Cygwin
- Steven has provided a patch
- You haven't had the time to look into it yet
So, it seems that on the issue of cygwin testing, you are the person who has to make the next step. Is that correct?
LOL - I'm a volunteer, I don't HAVE to do anything.
Is anybody on this list directly paid for his work on Boost? ;-)
But I do have to make a choice. My options are
a) spend time with b2 development along the lines that Steven has suggested.
Steven has provided a patch, you only need to apply it and try again. Do you think you can allocate 5 minutes for this some time soon?
b) presume that the testing on my OS shows that the serialization library is indeed correct, and ignore the test failures on Cygwin and maybe MinGW.
c) ignore the failures in the develop test matrix caused by changes other libraries have made on develop, and just merge from develop into release.
As a release manager, I would advise against this approach - because if things break in master, all options at our disposal will be bad.
d) do nothing and wait for other stuff to get fixed. The serialization library won't get the latest improvements, but it won't break anything either. That's the option I chose for 1.59. I can do it again.
I fix bugs and check enhancements to the library on my local system. At that point I presume they are correct and that I haven't introduced new bugs. I look to the test system to prove me wrong, but right now, for the reasons I've cited, it can't do that. I need the help it's supposed to provide, but it isn't providing it. I'm not so concerned about the next release as I am about this process working smoothly. I've made several suggestions about how we can make this simpler - which amount to making test system development look more like the rest of Boost.
Could we adopt a more iterative approach? As you point out, we're volunteers, so a large and vague task like 'write a test suite for the regression report generator' is both hard to schedule and high-risk, given that such a test suite might not fix your actual problems. It would be more practical to fix the immediate issues, adding tests as we go - after all, that's how Boost.Build got to hundreds of test cases, and I see no reason why we should do differently for the regression report generator.

If I understand correctly, the current issues for you are:

- Shared library testing on OS X El Capitan. I will take a look.
- Testing on Cygwin. A patch was provided; it seems that your testing of said patch is still the best approach.
- Some unspecified issues with function visibility. If you need help with this, could you post a separate email?
- Issues where Spirit either affects Serialization, or produces so many warnings that everything else in the log is truncated. If this is still an issue, could you post a separate email detailing the problem?

Thanks,

--
Vladimir Prus
http://vladimirprus.com