In calling you a mathematician I was referring to this preference for sticking to the absolute bounds, in this case the limits of sampled waveforms for Fourier operations.
Ah, I see what you mean.
I must chuckle at your statement about summation versus integration, as even real-world summation approaches integration as the sample rate approaches infinity.
Well yes, the more samples you have and the more bits per sample, the closer to reality you get.
But there is more to it than that.
As you said, mathematically, or "ethereally" as you put it, the waveforms exist from minus infinity to plus infinity. But with the discrete transform we are confined to a box of a limited number of samples over a limited time. So really the thing only works cleanly for frequencies whose waves "fit" in the box a whole number of times. This means that if you are looking at an input frequency of, say, 60.5 Hz, as may be the case in Jay's project, then all of a sudden a DC offset appears in the FFT results, or a whole slew of spurious harmonics turns up.
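To put some numbers on that, here is a quick sketch (assuming Python with NumPy; the sample rate and window length are made up purely for illustration) of what happens when the input does and does not fit the box a whole number of times:

```python
# 256 samples at 1 kHz, so the FFT bins land on exact multiples of 1000/256 ≈ 3.9 Hz.
import numpy as np

fs = 1000.0          # sample rate in Hz (made-up numbers, just for illustration)
n = 256              # number of samples in the "box"
t = np.arange(n) / fs

def spectrum(freq_hz):
    x = np.sin(2 * np.pi * freq_hz * t)
    return np.abs(np.fft.rfft(x)) / n

# 62.5 Hz fits the window exactly (16 whole cycles): one clean bin, nothing else.
clean = spectrum(62.5)
# 60.5 Hz does not fit: energy smears across many bins, including bin 0 (the "DC offset").
leaky = spectrum(60.5)

print("62.5 Hz -> bins above 1% of peak:", np.count_nonzero(clean > 0.01 * clean.max()))
print("60.5 Hz -> bins above 1% of peak:", np.count_nonzero(leaky > 0.01 * leaky.max()))
print("60.5 Hz -> DC bin magnitude:", leaky[0])
```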
At this point you realize you need some windowing function on the input to neaten things up.
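Just as a sketch of what the window buys you (same made-up numbers as above, a plain Hann window, nothing tuned):

```python
# A Hann window tapers the ends of the "box" so the signal no longer appears to
# jump at the edges, which pulls the smeared energy back toward the true frequency.
import numpy as np

fs, n = 1000.0, 256
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 60.5 * t)         # the awkward 60.5 Hz case again

rectangular = np.abs(np.fft.rfft(x)) / n
windowed = np.abs(np.fft.rfft(x * np.hanning(n))) / n

# Compare how much energy lands outside the few bins nearest 60.5 Hz.
k = int(round(60.5 * n / fs))            # nearest bin index
def out_of_band(mag):
    keep = slice(max(k - 2, 0), k + 3)   # the bins we expect the tone to occupy
    return mag.sum() - mag[keep].sum()

print("out-of-band energy, no window:  ", out_of_band(rectangular))
print("out-of-band energy, Hann window:", out_of_band(windowed))
```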
My conclusion is that it pays to always keep the reality of the limits of the calculation in mind.
On a related note, I have noticed a few times that programmers throw floating point maths at their problems and are then surprised when their results blow up. There is a tendency to be lazy and assume, "Oh, I have floats, they are super accurate and can handle huge ranges, so I don't need to worry about how I do my calculations."
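A sketch of the kind of thing I mean (Python with NumPy single precision here, but the same trap bites a plain float accumulator in C) is the textbook one-pass variance formula:

```python
import numpy as np

# Data with a large offset: values around 100000 with a spread of about 1.
x = np.float32(100000.0) + np.random.randn(10000).astype(np.float32)

# Naive "mean of squares minus square of mean": two huge, nearly equal numbers
# are subtracted, so almost all the significant digits cancel.
naive = np.mean(x * x) - np.mean(x) ** 2

# Two-pass formula: subtract the mean first, then square the small residuals.
better = np.mean((x - np.mean(x)) ** 2)

print(naive)    # often negative or wildly wrong
print(better)   # close to 1.0
```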
I know I came to this conversation late, but Fourier transforms just came to my attention. The name alone does not convey how important they are. The way my brain works, I needed a graphic illustration of a DFT first; the math will make more sense now. At this point I want to be able to analyze the discrete Fourier transform. I'm still having a bit of trouble understanding how a given amplitude and frequency can be extracted from a waveform.
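To make the question concrete, here is a bare-bones sketch (plain Python, numbers made up) of what a single DFT bin does: multiply the samples by a cosine and a sine at the frequency you are asking about, add everything up, and the size of the result tells you how much of that frequency is in the waveform.

```python
import math

def dft_bin(samples, k):
    """Amplitude and phase of the k-th frequency bin for a list of samples."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
    # Factor 2 because a real signal's energy is split with the mirror bin n-k.
    amplitude = 2 * math.sqrt(re * re + im * im) / n
    phase = math.atan2(im, re)
    return amplitude, phase

# Test signal: 0.5 units of the 3-cycles-per-window frequency plus 0.2 of the 7-cycle one.
n = 64
signal = [0.5 * math.sin(2 * math.pi * 3 * i / n) + 0.2 * math.sin(2 * math.pi * 7 * i / n)
          for i in range(n)]

print(dft_bin(signal, 3)[0])   # ~0.5
print(dft_bin(signal, 7)[0])   # ~0.2
print(dft_bin(signal, 5)[0])   # ~0.0, nothing at that frequency
```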