binomial distribution setter function
This is my first time posting to this mailing list. After reading the mailing-list documentation for a good half an hour, I am still not sure how to post to the developer-only list. If I posted to the wrong list, please let me know and kindly point me to the developer one.

I am looking at the binomial.hpp file. I need to use the binomial object in a loop which requires high performance. I am thinking that constructing the binomial object each time in the loop would be less efficient than constructing a single object and then, each time, resetting the p parameter and reusing it. I am not sure why a setter method was not provided.

    296 RealType success_fraction() const
    297 { // Probability.
    298   return m_p;
    299 }
    300 void set_success_fraction(RealType p) {
    301   m_p = p;
    302 }

Line 296 is the getter method; I added the setter method at line 300.

--
Kemin Zhou
858 366 8260
On 18.01.20 at 07:57, Kemin Zhou via Boost wrote:
[original message quoted above, snipped]
As usual: did you measure before making assumptions about performance? Next: did you check what the ctor does? What is your reasoning for your statement? This is not meant to sound harsh, but rather to encourage the usual scientific working practice.

You'll see that the constructor does nothing but check invariants. Your setter does not do so, and hence is wrong (for some definition of "wrong", as usual). So the only overhead can be due to the check of valid parameters. Depending on how you pass in the arguments, this can even be removed, so try it first and measure where your performance suffers, or use e.g. Godbolt to check the assembly and verify your assumptions.

The better solution would then be to provide a ctor that does not do verification; probably the policy system can be used for this, if it isn't already. Again, it needs to be argued why this would be required and how much benefit it brings.

Regards, Alex
-----Original Message-----
From: Boost On Behalf Of Alexander Grund via Boost
Sent: 20 January 2020 08:35
To: boost@lists.boost.org
Cc: Alexander Grund
Subject: Re: [boost] binomial distribution setter function

[Kemin Zhou's message and Alexander Grund's reply, quoted in full above, snipped]
I concur with this assessment.

I suspect that you presume that construction is expensive, when it is really very cheap, only carrying out some quick sanity checks and a few assignments.

You need to be quite certain that this handful of instructions is on your critical path before doing something that will expose you to risks from passing a bad parameter.

Paul A. Bristow

PS Reminder: if you don't put your code inside try/catch blocks, you won't get any error messages helping you see what went wrong 😊 But of course that will slow things down a bit. Still, it is useful to have the checks until you are certain that your code is correct.
Sorry for not doing the speed check. My argument is purely based on reading the code and counting the number of operations executed: the cost of constructing an object versus the cost of only updating it. In this situation I have a tight loop, and the input values are already validated, so the extra check inside the constructor could be saved.

On Mon, Jan 20, 2020 at 2:15 AM Paul A Bristow via Boost <boost@lists.boost.org> wrote:
[Paul A. Bristow's reply of 20 January 2020, and the earlier messages it quotes, snipped]

_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
-- Kemin Zhou 858 366 8260
On Tue, 17 Mar 2020 at 15:27, Kemin Zhou via Boost wrote:
[message quoted above, snipped]
You cannot, and should not, make assumptions based on the code alone (as Paul says). In the case of random numbers/distributions, micro-benchmarking is very, very difficult and bound to give the wrong result/conclusion. The only thing to do is to write all of the code [in a flexible way] and macro-benchmark, i.e. test the code in the context of your actual application; subtle changes make huge differences.

If all else fails, you still need more speed, and you are running Intel CPUs [that's a lot of ifs], you can try the distributions in the Intel Performance Libraries, the Math Kernel Library (MKL). This will undoubtedly require you to restructure your code, since you'll be wrapping a C API, but it WILL be faster, it is FREE to use, and it is AVAILABLE on Windows/Linux/macOS. If you're running AMD, you can try their math library, but you're off to a bad start [with AMD].

Lastly, you're talking about tight loops: is the code naturally parallel? If so, you could look at GPUs, e.g. OpenCL on Intel CPUs/GPUs, or one of the graphics-card vendors' offerings.

Fiddling with random numbers is highly entertaining, but also a rabbit hole. Do the macro-benchmarking and you are set to have the correct answer; that correct answer could well be very counter-intuitive. Extra instructions do not necessarily slow things down, and I have proof that one can construct cases where more instructions improve overall throughput [possibly due to quirks in the scheduler].

One last thing: in this kind of situation, always test things also, but not only, with (Thin-)LTO or LTCG turned on. And a last last one: /O2 is maximum optimization on MSVC, not /Ox as many think. On clang/gcc, -O3 is not necessarily faster than -O2. If you can live with it, -ffast-math might help as well, but that has important repercussions.

degski
--
@systemdeg
"We value your privacy, click here!" Sod off! - degski
"Anyone who believes that exponential growth can go on forever in a finite world is either a madman or an economist" - Kenneth E. Boulding
"Growth for the sake of growth is the ideology of the cancer cell" - Edward P. Abbey
participants (4)
- Alexander Grund
- degski
- Kemin Zhou
- pbristow@hetp.u-net.com