In short - a couple of functions with a few pointers would fall dismally short of the mark.
I agree with this. We need to define an interface that can accommodate the various forms of usage and implementation of the DFT, even if the first implementation of that interface is limited in terms of performance. So far everyone has identified a number of things that may belong in a DFT interface; maybe we need to work on a list. However, I don't know if the original poster is up for this kind of work - he proposed a simple implementation of the FFT that covered a single usage.

Some of the things the interface should support so far:

* Abstraction of the data it operates on (std::complex, fixed point, ...).
* Abstraction of dimensionality.
* Abstraction of the memory layout of the data: is it strided, are the real/imaginary parts interleaved, and if doing multiple dimensions, how are they arranged in memory? Using iterators is probably sufficient here.
* Size of the transform (supporting non-power-of-2 sizes is important).
* Whether it uses temporary storage or not. On this point I think support is a must: if, for example, you want to use FFTW as a backend implementation, you want to be able to store state across transform calls.
* Whether the transform happens in place or not.
* Whether we allow definition of phase rotations as part of the interface or require them as pre/post-processing steps. I am not sold on making these part of the transform itself.
* Whether we want to support real (vs. complex) DFT transforms, probably as a separate interface.
* How we will support backend customizations. Allowing a single interface to be backed by, say, FFTW is probably a very good thing in this situation. And not just FFTW: many embedded platforms provide their own custom implementation of the FFT that a user may wish to utilize, wrapped in a compatible API.

I don't agree that we should have one "transform" that is specialized for DFT, DCT, and so on. But whatever interface we define is worth thinking about in terms of how it applies to other similar transforms, since the usage could be almost identical.
I have been looking at this because I am working on a concept for a library called Boost.GAL (the same idea as GIL, but for audio). In doing so, I feel that GIL has been made too specific, working only on 2-D image data, as opposed to a DSP library that can work in N dimensions. That is, if we define a DFT or DCT, having that same algorithm work for a graphics library, an audio library, and any other kind of DSP application is probably a good thing.