The inputs are scaled so that +/-1.0 is represented by +/-4096, i.e. twelve bits after the binary point plus a sign bit.
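A minimal sketch of that Q12 representation (the function names here are hypothetical, just for illustration):

```c
#include <stdint.h>
#include <assert.h>

/* Q12 fixed point: 1.0 is represented by 4096 (2^12),
   so a sample fits in a sign bit plus 12 fraction bits. */
#define Q12_ONE 4096

static int16_t float_to_q12(double x)
{
    return (int16_t)(x * Q12_ONE);
}

static double q12_to_float(int16_t raw)
{
    return (double)raw / Q12_ONE;
}
```

So +1.0 becomes 4096, -1.0 becomes -4096, and 0.5 becomes 2048, all of which fit comfortably in an int16_t.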
The thing is, the FFT effectively multiplies each sample of the input by the corresponding sample of a sine wave and adds the products up. So for 512 inputs of +/-1.0 we get outputs in the range +/-512.0. It does the same against a cosine, and at every possible frequency.
The upshot is that we need 22 bits to accumulate the results, before converting to magnitudes that is. If the sample size were doubled we would need another bit, and so on.
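The bit-growth arithmetic can be checked directly: 12 fraction bits, plus log2(512) = 9 bits of growth from the summation, plus a sign bit, gives 22. A small sketch (treating a magnitude of exactly 2^k as fitting in k+1 bits, since -2^k is representable in two's complement):

```c
#include <stdint.h>
#include <assert.h>

/* Worst-case accumulator magnitude for an n-sample correlation of
   Q12 samples (|x| <= 4096) against a unit-amplitude sinusoid. */
static int32_t worst_case_sum(int n_samples)
{
    return (int32_t)n_samples * 4096;
}

/* Bits needed to hold +/-mag in two's complement. */
static int bits_needed(int32_t mag)
{
    int bits = 0;
    while (((int32_t)1 << bits) < mag)
        bits++;                       /* bits = ceil(log2(mag)) */
    return bits + 1;                  /* plus a sign bit */
}
```

For 512 samples this gives 22 bits; doubling to 1024 samples gives 23, one more bit per doubling, as stated.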
I thought perhaps we could divide each intermediate result by 512 as we went along, but I fear that might throw away a lot of useful information.
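The usual way that scaling is spread out is to divide by 2 at each of the 9 butterfly stages rather than by 512 in one go. A sketch of a scaled radix-2 butterfly, real part only (a simplification of the full complex butterfly; names hypothetical, and an arithmetic right shift on signed values is assumed):

```c
#include <stdint.h>
#include <assert.h>

/* One radix-2 butterfly with a divide-by-2 per stage. Scaling at
   every stage keeps the data within 13 bits, but each truncating
   shift can lose up to 1 LSB, which accumulates over the stages --
   this is the information loss worried about above. */
static void butterfly_scaled(int16_t *a, int16_t *b, int16_t w /* Q12 twiddle */)
{
    int32_t t  = ((int32_t)(*b) * w) >> 12;   /* Q12 multiply */
    int32_t hi = ((int32_t)(*a) + t) >> 1;    /* scale this stage */
    int32_t lo = ((int32_t)(*a) - t) >> 1;
    *a = (int16_t)hi;
    *b = (int16_t)lo;
}
```

Rounding the shifts instead of truncating, or only scaling on stages that actually overflow (block floating point), are the common ways to limit that loss, at the cost of extra work per butterfly.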
I have yet to see a way to reduce the memory requirements. We could go to 24 bits, but that's a pain and would slow things down.
Comments
Good question and one that puzzled me a bit.