Re: [boost] What has become of John Maddock's implementation of the Remez algorithm for Chebychev approximation?
On 6/15/21 11:48 PM, Joachim Wuttke via Boost wrote:
Until v1.53, Boost Math had doc pages [1,2] on the Remez algorithm. This algorithm [3] computes the coefficients of the Chebyshev polynomial that approximates a given function. Page [2] refers to actual code
In addition to John's reply, I wanted to add that Boost.Math has direct support for Chebyshev polynomials now: https://www.boost.org/doc/libs/release/libs/math/doc/html/math_toolkit/sf_po...
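Roughly speaking, a Chebyshev transform samples the function at Chebyshev nodes and recovers the series coefficients via a cosine sum. Here is a minimal pure-Python sketch of that idea, a stand-in for illustration only, not Boost's implementation; the function names, the example function `exp`, and the interval are my choices:

```python
import math

def cheb_coeffs(f, a, b, n):
    """Coefficients c_k so that f(x) ~ c_0/2 + sum_{k>=1} c_k T_k(u),
    with u = (2x - a - b)/(b - a), via interpolation at Chebyshev nodes."""
    # Chebyshev-Gauss nodes on [-1, 1], mapped to [a, b]
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    fvals = [f(0.5 * (b - a) * u + 0.5 * (b + a)) for u in nodes]
    coeffs = []
    for k in range(n):
        s = sum(fvals[j] * math.cos(math.pi * k * (j + 0.5) / n)
                for j in range(n))
        coeffs.append(2.0 * s / n)
    return coeffs

def cheb_eval(coeffs, a, b, x):
    """Evaluate the truncated series with the Clenshaw recurrence."""
    u = (2.0 * x - a - b) / (b - a)
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * u * b1 - b2 + c, b1
    return u * b1 - b2 + 0.5 * coeffs[0]

c = cheb_coeffs(math.exp, 0.0, 1.0, 16)
```

With 16 terms the truncated series reproduces exp on [0, 1] to near machine precision; for smooth functions the coefficients decay rapidly, which is what makes the approach practical.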
Dear John, dear Bjorn,
thank you very much for your answers.
If I understand correctly, function chebyshev_transform::chebyshev_transform solves the minimax approximation problem in a completely different way than the Remez algorithm, but results should be of similar quality?
Kind greetings, Joachim
On 16/06/2021 14:03, Joachim Wuttke via Boost wrote:
Dear John, dear Bjorn,
thank you very much for your answers.
If I understand correctly, function chebyshev_transform::chebyshev_transform solves the minimax approximation problem in a completely different way than the Remez algorithm, but results should be of similar quality?
Exactly. Depending on the method used and your input conditions you will end up with subtly different results, but should end up with a set of coefficients that are "in the zone". The actual minima can be broad and bumpy, if you see what I mean.

Be aware that even smooth functions can be resistant to decent approximations - for example, I have never managed to generate any for the elliptic integrals, and haven't heard of anyone else succeeding either (I haven't looked in a while though), even though the functions are smooth and parabola-like.

Whatever method you use, you will need an accurate implementation of the function (which may be super slow - that doesn't matter at this stage - and perhaps calculated at extended precision via numeric integration or whatever). You may also need to "divide out" whatever the function converges to at its limits. Some more info here: https://www.boost.org/doc/libs/1_76_0/libs/math/doc/html/math_toolkit/remez....

Good luck ;)

John.
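For readers who want to see the exchange idea behind the Remez algorithm in miniature, here is a hedged toy sketch for the best degree-1 (linear) minimax fit. All names are hypothetical helpers of mine, and this is in no way Boost's implementation; it just illustrates the solve-then-exchange loop:

```python
import math

def solve3(m, rhs):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    aug = [row[:] + [r] for row, r in zip(m, rhs)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(3):
            if r != col:
                factor = aug[r][col] / aug[col][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    return [aug[i][3] / aug[i][i] for i in range(3)]

def remez_linear(f, a, b, iters=20, grid=2001):
    """Toy Remez exchange for the minimax linear fit p(x) = c0 + c1*x."""
    xs = [a, 0.5 * (a + b), b]          # initial reference set (n+2 = 3 points)
    pts = [a + (b - a) * i / (grid - 1) for i in range(grid)]
    for _ in range(iters):
        # Solve p(x_i) + (-1)^i * E = f(x_i) for (c0, c1, E).
        m = [[1.0, xs[i], (-1.0) ** i] for i in range(3)]
        c0, c1, E = solve3(m, [f(x) for x in xs])
        err = lambda x: f(x) - (c0 + c1 * x)
        # Exchange step: keep the largest-|error| point of each sign run.
        new, cur_sign, best = [], 0, None
        for x in pts:
            s = 1 if err(x) >= 0 else -1
            if s != cur_sign:
                if best is not None:
                    new.append(best[1])
                cur_sign, best = s, (abs(err(x)), x)
            elif abs(err(x)) > best[0]:
                best = (abs(err(x)), x)
        new.append(best[1])
        if len(new) == 3:
            xs = new
    return c0, c1, abs(E)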
Be aware that even smooth functions can be resistant to decent approximations - for example I have never managed to generate any for the elliptic integrals, and haven't heard of any one else succeeding either (I haven't looked in a while though), even though the functions are smooth and parabola like.
Interesting, and slightly worrying, given Trefethen's claims about the broad application domain of Chebyshev polynomials. Is the difficulty with elliptic integrals discussed in any publication? Did you contact the chebfun team? Right now, I am moderately successful with a Python implementation, https://github.com/chebpy/chebpy/issues/38 Regards, Joachim
On Jun 16, 2021, at 21:23, Joachim Wuttke via Boost wrote:
Is the difficulty with elliptic integrals discussed in any publication?
A very difficult subject. The "smoothness" and "parabolicity" of those is highly misleading. I would not expect any kind of polynomial to approximate elliptic integrals very well, because the radius of convergence of the usual power series for them is finite, and LESS THAN the range of the argument. From my experience with these functions, I would subtract an approximant based on a pole-zero product expansion and build a polynomial approximant for the remainder. A classic paper is Luke in 1968: https://www.ams.org/journals/mcom/1968-22-103/S0025-5718-1968-0226825-3/S002...
Cheers, Kostas
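Kostas's point about the finite radius of convergence can be made concrete: the complete elliptic integral K(m) has a logarithmic singularity at m = 1, so a plain polynomial in m must fail near that endpoint, while the remainder after subtracting the singular part stays bounded. A small self-contained check, using the standard AGM relation for K (the helper names and tolerances are my choices, not from the thread):

```python
import math

def agm(a, b):
    """Arithmetic-geometric mean; quadratically convergent."""
    while abs(a - b) > 1e-15 * a:
        a, b = 0.5 * (a + b), math.sqrt(a * b)
    return a

def ellipk(m):
    """Complete elliptic integral K(m), parameter m = k^2, via the AGM."""
    return math.pi / (2.0 * agm(1.0, math.sqrt(1.0 - m)))

# K(m) diverges like -(1/2)*log(1 - m) as m -> 1; subtracting that
# singular part leaves a bounded remainder (tending to log 4).
remainder = ellipk(1.0 - 1e-8) + 0.5 * math.log(1e-8)
```

Here `remainder` comes out close to log 4 ~ 1.386. Approximating the remainder by a polynomial (or dividing the singular behavior out, as John also hints) is far better conditioned than attacking K directly.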
On 6/16/21 3:03 PM, Joachim Wuttke via Boost wrote:
If I understand correctly, function chebyshev_transform::chebyshev_transform solves the minimax approximation problem in a completely different way than the Remez algorithm, but results should be of similar quality?
If I remember Trefethen correctly, Remez reduces the worst-case error (the infinity norm) and Chebyshev reduces the average error (the L_2 norm).
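Trefethen's broader point is that the two are nonetheless close in practice: a Chebyshev-based fit is near-minimax in the sup norm as well. A tiny numeric illustration (the example function, degree, and grid are my choices, not from the thread): the degree-1 interpolant of exp on [0, 1] through the two mapped Chebyshev nodes has a worst-case error of about 0.12, modestly above the true minimax error of about 0.106.

```python
import math

# Degree-1 interpolation of exp on [0, 1] at the two mapped Chebyshev
# nodes, versus the known minimax (Remez) error of about 0.1059.
u = math.cos(math.pi / 4.0)                  # Chebyshev node for 2 points
x0, x1 = 0.5 * (1.0 - u), 0.5 * (1.0 + u)    # nodes mapped to [0, 1]
slope = (math.exp(x1) - math.exp(x0)) / (x1 - x0)

def p(x):
    return math.exp(x0) + slope * (x - x0)

# Worst-case (sup-norm) error, estimated on a fine grid.
max_err = max(abs(math.exp(i / 2000.0) - p(i / 2000.0)) for i in range(2001))
```

`max_err` cannot beat the minimax error, but it lands "in the zone", which is what "similar quality" means in practice.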
participants (4)
- Bjorn Reese
- Joachim Wuttke
- John Maddock
- Kostas Savvidis