
Review on Fourier Transform


Comments

  • Dave Hein Posts: 6,347
    edited 2015-08-25 17:26
    In order to understand the FFT you need to understand the Fourier Transform, Fourier Series, Discrete Fourier Transform (DFT) and then the Fast Fourier Transform (FFT). Each one builds upon the ideas presented in the earlier ones.

    The Fourier Transform represents a continuous signal as a continuous function of frequency that weights sine and cosine functions. Since sine and cosine functions extend from minus infinity to infinity, the continuous signal also extends from minus infinity to infinity.

    The Fourier Series is a subset of the Fourier Transform that uses discrete frequencies for the sines and cosines. In addition, the frequencies that are used are integer multiples of a single fundamental frequency. This property of the Fourier Series requires that the continuous signal repeat with the period of the fundamental frequency.

    The Discrete Fourier Transform is a Fourier Transform applied to sampled data. In the continuous signal domain, each sample is represented by the sin(x)/x function, which has the effect of limiting the bandwidth of the signal to the Nyquist frequency. The Nyquist frequency is one-half of the sampling frequency, and by band-limiting the signal it can be represented exactly by discrete samples.

    The DFT can represent a discrete signal that ranges from minus infinity to infinity with an infinite number of sine and cosine terms. However, in practice the DFT is normally applied to a finite number of samples of length N. This has the effect of making the DFT a discrete version of the Fourier Series, where all frequencies are a multiple of the fundamental frequency and limited to the Nyquist frequency.

    The FFT is an efficient method for computing an N-point DFT by taking advantage of symmetries in the sine and cosine basis functions. The cosine function has even symmetry around the origin, and the sine function has odd symmetry. The odd and even properties of the original signal can be extracted by performing simple add and subtract "butterfly" operations. The even portion of the signal only applies to the cosine terms, and the odd portion of the signal only applies to the sine terms. This odd/even extraction can be applied at successive stages to produce an efficient method for implementing the DFT.

    The use of complex numbers "simplifies" the math used to do the Fourier transform. Instead of using independent sine and cosine terms, a single complex number can be used to represent a frequency term.
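
    To make the sine/cosine picture concrete, here is a minimal C sketch (my own illustration, not code from this thread) of a direct N-point DFT that uses separate sine and cosine sums and no complex type at all. It is O(N^2), so it shows what the FFT computes, not how the FFT speeds it up.

    #include <math.h>
    #include <stdio.h>

    #define N  8
    #define PI 3.14159265358979323846

    /* Direct N-point DFT with separate sine and cosine sums, no complex type.
       re[k] is the cosine (even) correlation and im[k] the sine (odd)
       correlation at frequency k.  O(N*N) operations. */
    static void dft_real(const double x[N], double re[N], double im[N])
    {
        for (int k = 0; k < N; k++) {
            re[k] = 0.0;
            im[k] = 0.0;
            for (int n = 0; n < N; n++) {
                double angle = 2.0 * PI * k * n / N;
                re[k] += x[n] * cos(angle);
                im[k] -= x[n] * sin(angle);
            }
        }
    }

    int main(void)
    {
        double x[N] = {0, 1, 2, 3, 4, 5, 6, 7};   /* one period of a sawtooth */
        double re[N], im[N];

        dft_real(x, re, im);
        for (int k = 0; k < N; k++)
            printf("k=%d  cosine term %8.4f  sine term %8.4f\n", k, re[k], im[k]);
        return 0;
    }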



  • ErNa Posts: 1,742
    Hi Dave, I don't see anything in what you wrote that I can't agree with. But the discussion so far shows that the "feeling" for what all this means is still missing. It would be nice to end up with a text from which everybody who can write a program in Spin gets a basic understanding of what the Fourier transform is good for, and why.
  • Heater. Posts: 21,230
    Dave Hein,

    I agree with everything you have said there.

    Except, there is that big leap from understanding the maths of the Fourier Transform to "The FFT is an efficient method for computing an N-point DFT by taking advantage of symmetries in the sine and cosine basis functions."

    All that bit reversal and other tricks are not obvious. Which is no doubt why the Cooley–Tukey paper caused a big splash in 1965. And why the author of the DSP book mentioned above said that most engineers could not write an FFT program from scratch.

    From a bit of a philosophical perspective, and with what limited mathematical skills I have, I have sometimes marvelled that after hours of lectures and pages of formulas there is actually a simple and beautiful idea, buried under a ton of blinding notation.

    I'm leaning towards the idea that this messy business with "i" is such a blinding notation for many and that we don't actually need it.




  • Dave Hein Posts: 6,347
    The bit reversal is a direct result of the even/odd separation. In the first stage the butterfly operation is applied to samples (0, N-1), (1, N-2), (2, N-3), and so on. In the second stage the pairs are (0, N/2-1), (1, N/2-2), etc. Subsequent stages use sample pairs from smaller and smaller blocks. This method of pairing results in the indices being in bit-reversed order. The simplest example of how this works is the Walsh-Hadamard Transform, but the same concept applies to the FFT.
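
    To see the ordering this produces, here is a small C sketch (my illustration, assuming N is a power of two) that just prints the bit-reversed index for every sample position; for N = 8 it gives the familiar 0, 4, 2, 6, 1, 5, 3, 7 shuffle.

    #include <stdio.h>

    /* Reverse the low 'bits' bits of index i, e.g. for N = 8 (3 bits): 1 -> 4, 3 -> 6. */
    static unsigned bit_reverse(unsigned i, unsigned bits)
    {
        unsigned r = 0;
        for (unsigned b = 0; b < bits; b++) {
            r = (r << 1) | (i & 1);   /* shift the lowest bit of i into r */
            i >>= 1;
        }
        return r;
    }

    int main(void)
    {
        const unsigned bits = 3;                 /* N = 8 */
        for (unsigned i = 0; i < (1u << bits); i++)
            printf("%u -> %u\n", i, bit_reverse(i, bits));
        return 0;
    }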

    I don't believe Joseph Fourier used complex numbers when he first defined the Fourier Transform. His original publication from 1822 doesn't seem to show any complex numbers; it is expressed in terms of sines and cosines instead. At some point Euler's formula (e**ix = cos(x) + i * sin(x)) was used to make the math more concise.
  • Heater. Posts: 21,230
    Dave,

    Yep. It only took two decades for my penny to drop on that bit-reversal thing. I guess I'm a bit slow. :depressed:

    The question then is: Why does reordering things by recursively taking the odd and even samples suddenly result in a massive speed up of the calculation? Not so obvious I think.

    I'm glad to hear that Joseph did not use any "i" originally. That shows that the concept of imaginary numbers is not needed to get a handle on this. Concise notation is good and all, but it is one of those blinders in maths that is a barrier to those who are less fluent, preventing them from seeing what is going on. For example, Euler's e**ix thing. Every time you see that, it's hiding the cos and sin behind it.

    Do you have a link to that 1822 paper?

  • Yanomani Posts: 1,524
    edited 2015-08-26 06:02
    Hi Heater

    Kindly asking for Dave Hein's permission, and yours too, of course, and provided you don't mind doing some reading in French, I believe I can say: voilà, the original, digitized:

    https://ia802606.us.archive.org/24/items/thorieanalytiqu00fourgoog/thorieanalytiqu00fourgoog.pdf

    Enjoy

    Henrique

    P.S. A 22-page, English-reader-friendly introduction to Fourier's work can be found at:

    onlinelibrary.wiley.com/doi/10.1029/1998RG900006/pdf
  • John A. Zoidberg Posts: 514
    edited 2015-08-26 06:10
    I wrote a 256-point FFT for an ARM Cortex-M4 board, an STM32 Nucleo. It was one of my assignments when I was a staff member at my college. I used complex numbers throughout that work. I verified the results against Matlab - the code is actually correct.

    The butterfly diagram is the most confusing part of the whole thing. I understand it for a few weeks, but if I don't look at the diagram for a while, I forget how it works until I sit down and study it again.

    If I look at the diagram from a distance, I manage to understand it better than when I look at it really closely.

    Also, getting all the twiddle factors onto the board isn't fun. I pre-calculate them and store them as constants, since ARM microcontrollers don't have much RAM unless there's an SDRAM expansion.
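
    For what it's worth, here is roughly how one might picture that pre-calculation step: a small host-side C sketch (an assumption about the workflow, not the actual assignment code) that prints const cosine/sine tables which can then be pasted into the firmware, so the twiddle factors live in flash rather than RAM.

    #include <math.h>
    #include <stdio.h>

    #define FFT_SIZE 256
    #define PI       3.14159265358979323846

    /* Generate const twiddle tables for a radix-2 FFT.  Only FFT_SIZE/2 factors
       are needed because of the periodicity/symmetry of sine and cosine. */
    int main(void)
    {
        printf("static const float twiddle_cos[%d] = {\n", FFT_SIZE / 2);
        for (int k = 0; k < FFT_SIZE / 2; k++)
            printf("%.8ff,%s", cos(2.0 * PI * k / FFT_SIZE), (k % 8 == 7) ? "\n" : " ");
        printf("};\n\n");

        printf("static const float twiddle_sin[%d] = {\n", FFT_SIZE / 2);
        for (int k = 0; k < FFT_SIZE / 2; k++)
            printf("%.8ff,%s", -sin(2.0 * PI * k / FFT_SIZE), (k % 8 == 7) ? "\n" : " ");
        printf("};\n");
        return 0;
    }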

    If I'm not mistaken, the FFT exploits the periodicity - hence you do not have to manually calculate what is above 360 degrees.

    With a SIMD (single instruction, multiple data) instruction set you can process the real and imaginary parts simultaneously, which speeds up the FFT. However, only the ARM Cortex-A series parts have that (the NEON units?).
  • ErNa Posts: 1,742
    The paper I pointed to in the first comment gives an overview of the Fourier transform (not in a simple way) and shows why different algorithms work best depending on hardware, real-time conditions, cost of memory, and so on. Read the introduction and the conclusion. I used the FFT more than 30 years ago, porting a BASIC program to FORTRAN. I did not understand what the address flipping does, but the program gave the right results. To me it was surprising that even at that time it was nowhere mentioned that the FFT can be used for any field length, not only 2^N; only now, from this overview, did I get that information. The case 2^N just results in a very simple implementation. Knowledge just vanishes. I also checked: Hamilton invented the quaternions in 1843 as an extension of the complex numbers. So the foundation was laid long before our time, and the question remains: how can people conceive of thoughts that generations later have problems following?

    The same is true for Euler's identity:

    The concept of a power (a^b) was developed to simplify repeated multiplication in a special case. At first, b was a natural number. Later the question came up: what if b is a real number? Then complex numbers were invented and the question arose: what does a complex exponent mean? Or something along those lines.

    https://en.wikipedia.org/wiki/Euler's_formula

    But we are going quite far now.

    I want to come back to Heater's very nice paper-strip example.
    He had a strip of paper along an x-axis and twisted the strip. Walking along x, the direction of the paper's width rotates and can be seen as a rotating vector in the y/z plane. This is a two-dimensional vector with components sin(phi), cos(phi). But at the same moment there is a vector pointing from the origin to that vector's tip, now three-dimensional: x, sin(phi), cos(phi).

    The same picture, but different interpretation. Now we have a vector in space.

    But there is a third interpretation: a one-dimensional vector along the x-axis, where the value at position x is a complex number. If we only look at the real part of that number, we see a sinusoidal function.
    So this is the vector I am talking about when a signal is seen as a vector: vector = (v1, v2, v3, ..., vn) means a vector of n dimensions determined by n values. In analysis we call this a function; in algebra we call it a vector.

    That is what we have to grasp. And now we can think about what symmetry means and how it makes life easier.



  • Heater. Posts: 21,230
    edited 2015-08-26 10:15
    Let's not skip Euler so fast. His formula exactly describes that twisted strip of paper we were imagining. In fact Wikipedia has a nice picture of that strip of paper here:
    https://en.wikipedia.org/wiki/File:Euler's_Formula_c.png

    But I do want to avoid Euler:

    1) We have the definition of the "value" of our signal as simply a pair of numbers (x, y), as in the picture. And what we see on the oscilloscope or graph is the "magnitude" of that pair of numbers.

    2) We have the rules for adding and multiplying those values. The multiply rule being a bit weird.

    3) We can demonstrate that those rules have the useful property that multiplication results in a rotation of the point around the z axis.

    With that we should be able to explain the FFT without ever mentioning Euler, or "i" or "e". Just using the simple math everyone knows from high school.
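
    As a sketch of that idea (my own, and only a sketch): a value is a plain struct of two numbers, addition is componentwise, and the one "weird" multiply rule is enough to make repeated multiplication by a unit-length pair walk a point around a circle while its magnitude stays put.

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979323846

    /* A "value" is just a pair of ordinary numbers.  No i, no e. */
    typedef struct { double x, y; } pair;

    /* Addition is componentwise. */
    static pair add(pair a, pair b) { return (pair){ a.x + b.x, a.y + b.y }; }

    /* The "weird" multiply rule. */
    static pair mul(pair a, pair b)
    {
        return (pair){ a.x * b.x - a.y * b.y, a.x * b.y + a.y * b.x };
    }

    static double magnitude(pair a) { return sqrt(a.x * a.x + a.y * a.y); }

    int main(void)
    {
        pair r = { cos(PI / 4.0), sin(PI / 4.0) };   /* unit-length pair at 45 degrees */
        pair p = { 1.0, 0.0 };

        /* Each multiplication rotates p by 45 degrees; the magnitude never changes. */
        for (int step = 0; step <= 8; step++) {
            printf("step %d: (%7.4f, %7.4f)  magnitude %.4f\n", step, p.x, p.y, magnitude(p));
            p = mul(p, r);
        }

        pair s = add((pair){ 1.0, 2.0 }, (pair){ 3.0, -1.0 });
        printf("sum example: (%.1f, %.1f)\n", s.x, s.y);
        return 0;
    }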

  • ErNa Posts: 1,742
    Euler and i are indeed not needed. The reason is: if you have your original signal, it can be split into a symmetric and an antisymmetric part. If you transform those two parts separately, you get two separate results: a sine spectrum and a cosine spectrum. If you then wish to have a single power-density spectrum, you have to combine the sin and cos parts in a special way: see them as the components of a vector, or as the real and imaginary parts of a complex number. In either case, the different ways of determining the absolute value give the same result.
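
    A minimal C sketch of that split (my own, assuming we mirror the samples around the start of the period, so index 0 is its own mirror): every sampled period is the sum of a symmetric part, which only correlates with the cosines, and an antisymmetric part, which only correlates with the sines.

    #include <stdio.h>

    #define N 8

    int main(void)
    {
        /* Split one sampled period into symmetric (even) and antisymmetric (odd)
           parts around index 0, so that x[n] = even[n] + odd[n]. */
        double x[N] = {0, 1, 2, 3, 4, 5, 6, 7};
        double even[N], odd[N];

        for (int n = 0; n < N; n++) {
            int m = (N - n) % N;              /* the mirror sample; index 0 maps to itself */
            even[n] = 0.5 * (x[n] + x[m]);
            odd[n]  = 0.5 * (x[n] - x[m]);
        }

        for (int n = 0; n < N; n++)
            printf("x=%4.1f  even=%5.2f  odd=%5.2f  sum=%5.2f\n",
                   x[n], even[n], odd[n], even[n] + odd[n]);
        return 0;
    }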
  • ErNa Posts: 1,742
    edited 2015-08-28 12:15
    When difficult tasks turn out to be solved in a very simple way.

    A key point is to understand symmetry. Imagine you have a duplex data cable: ground, receive, transmit. And male/female connectors. And you have cable segments: male/male, male/female, female/female.

    The cable has to connect two boxes that are identical, so both boxes have the same type of connector, say male. Pin 1 is ground, pin 2 transmit, pin 3 receive. The cable must be of the crossed type, connecting pin 2 to 3 and 3 to 2.
    If you wire the male/female cable to connect pin 2 to socket 2 and pin 3 to socket 3, you can chain as many pieces together as you like; the signal is just fed through.
    To connect the boxes you need the same connector type on both ends of the cable, so you have to use at least one segment with the same connector type on both ends. In that segment the signals have to be crossed: what enters pin/socket 2 exits pin/socket 3.
    It turns out that if the cables are made like this, you can put them together in any order; as long as the end connectors fit the boxes, the signal transmission will work.

    If you think that is trivial and not a problem, you have never been out in the field, searching for a broken connection, with nothing but used cables at hand that you didn't make yourself.

    What do we learn from this: symmetry is a powerful concept in science.

    Now imagine the following:

    You have a number of data samples in a row, e.g. 413. And you realize: the first value is equal to the last, the second to the last but one, and so on. Value 207 is equal to itself. Next you have another row of 413 data samples, but in this case the last value is the negative of the first, and so on.

    I bet: whenever you multiply each value in one row by the corresponding value in the other row and sum up the results, you will end up with zero.

    The reason is obvious.
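
    For anyone who wants to take the bet numerically, here is a small C sketch (my own, with arbitrary made-up sample values) that builds one symmetric and one antisymmetric row of 413 samples and sums the products; the result is zero apart from rounding.

    #include <math.h>
    #include <stdio.h>

    #define N 413

    int main(void)
    {
        double sym[N], asym[N];

        /* Symmetric row: first equals last, second equals last-but-one, ...
           Antisymmetric row: last is the negative of the first, and so on.
           The middle sample (index 206, i.e. value 207 counting from 1) must be 0
           in the antisymmetric row, since it is its own mirror. */
        for (int i = 0; i < N / 2; i++) {
            double a = sin(0.37 * i) + 0.25 * i;       /* any values will do */
            double b = cos(0.11 * i) - 0.01 * i * i;
            sym[i]  = a;   sym[N - 1 - i]  = a;
            asym[i] = b;   asym[N - 1 - i] = -b;
        }
        sym[N / 2]  = 42.0;                            /* middle value: anything */
        asym[N / 2] = 0.0;

        double sum = 0.0;
        for (int i = 0; i < N; i++)
            sum += sym[i] * asym[i];

        printf("sum of products = %g\n", sum);         /* zero up to rounding */
        return 0;
    }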
  • ErNa Posts: 1,742
    But what has this to do with a Fourier transform?

    What did we do? We calculated a sum of products. What if we see the sampled data as the coordinates of a vector in a space with 413 coordinate axes, all perpendicular to each other? OK, I admit that is hard to see, so we imagine it.
    In that case we have calculated the scalar product of two vectors, and the value was zero. That means the vectors are orthogonal.

    In the next experiment we again have a first row of sampled data (a vector) and an artificial row of numbers: one period of a sinusoid. The amplitude of this sinusoid is scaled so that the sum of the squares is one. We call this vector normalized.

    A simple example: a first vector (0, 1, 0). Its length is 1. We can identify it as a vector in space: X:0, Y:1, Z:0, pointing in the Y direction.

    Let us take a second vector: (0.3, 7, 4). How much of the first vector is "contained" in the second? We calculate: 0*0.3 + 1*7 + 0*4 = 7. As we see, the second vector extends seven units in the Y direction. This looks trivial, but it ain't necessarily so.

    Now let us determine how much of the sinusoid is in the measured signal.
    We just calculate the scalar product between the signal vector and the "sinusoid" vector, and that's it. Then we know how much of frequency 1 (one cycle per signal length) the signal contains.
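
    A small C sketch of exactly this idea (my own illustration, not ErNa's code): a plain scalar product, first on the (0, 1, 0) example, then projecting a sampled signal onto a normalized one-period sinusoid to see how much of frequency 1 it contains.

    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979323846

    /* Scalar (dot) product of two n-dimensional vectors. */
    static double dot(const double *a, const double *b, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void)
    {
        /* How much of (0, 1, 0) is contained in (0.3, 7, 4)?  Answer: 7. */
        double y_axis[3] = {0.0, 1.0, 0.0};
        double v[3]      = {0.3, 7.0, 4.0};
        printf("component along Y = %g\n", dot(y_axis, v, 3));

        /* The same idea with a sampled signal: project it onto a normalized
           one-period sinusoid.  The result is the "amount" of frequency 1. */
        enum { N = 8 };
        double signal[N] = {0, 1, 2, 3, 4, 5, 6, 7};
        double sine[N];
        double norm = 0.0;
        for (int i = 0; i < N; i++) {
            sine[i] = sin(2.0 * PI * i / N);
            norm += sine[i] * sine[i];
        }
        norm = sqrt(norm);                   /* scale so the sum of squares is 1 */
        for (int i = 0; i < N; i++)
            sine[i] /= norm;

        printf("amount of frequency 1 in the signal = %g\n", dot(signal, sine, N));
        return 0;
    }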
  • ErNa Posts: 1,742
    edited 2015-08-31 08:24
    Calculating the scalar product between the signal U and the vector "sin(1)" means multiplying Ui * sin(1)i for i from 0 to n-1 (n = number of samples in the interval) and summing up:

    U0 * sin(1)0 + U1 * sin(1)1 + ... + U(n-1) * sin(1)(n-1).

    Now we have to be watchful:
    When doing a frequency analysis, we first determine the period of a signal and then we analyse a single period. Since in reality there is no absolutely periodic signal, we just do our best to determine a period. At that moment, this very period determines the signal's past and future: the moment we say "this is one period of the signal", we automatically declare that the signal is continued by repetition.

    Looking at the symmetry of the sinusoid: the first value (index 0) is equal to the first value of the next period (index n), while the last value (index n-1) corresponds to the second value (index 1).

    Now let us have a signal period of 8, showing 1 period of a sinusoid.
    index  degrees  value

    0      0        0.000
    1      45       0.707
    2      90       1.000
    3      135      0.707
    4      180      0.000
    5      225      -0.707
    6      270      -1.000
    7      315      -0.707
    ---
    8      0        0.000   (etc.)

    We see that only 3 distinct values (apart from sign) are needed to describe a complete sinusoid with 8 samples.
    But this is not yet a normalized vector, because the sum of the squares of the components has to be 1, and here it is not, as can easily be calculated:
    0.0 + 0.5 + 1.0 + 0.5 + 0.0 + 0.5 + 1.0 + 0.5 = 4
    The length is therefore 2, so we have to scale all the components by 0.5 to bring the vector's length to 1. The vector then becomes:

    (0.0, 0.3535, 0.5, 0.3535, 0.0, -0.3535, -0.5, -0.3535)
    This vector represents a sinusoid of frequency 1 over the period length of 8; it is our basis vector.

    Now let us take an arbitrary signal with a period of 8 samples and see how much of frequency 1 is in it.

    The sample signal is:

    ( 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0)

    This represents, e.g., a voltage ramping up. As this is one period, it is repeated, and so we have a sawtooth signal:

    ( 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0), ( 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0), ( 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0), ...

    We determine the scalar product:

    (0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0) * (0.0, 0.3535, 0.5, 0.3535, 0.0, -0.3535, -0.5, -0.3535) = 0.0 + 0.3535 + 1.0 + 1.0605 + 0.0 - 1.7675 - 3.0 - 2.4745 = -4.828

    That means: a sawtooth of the given shape contains the frequency 1 at a value of -4.828.

    To calculate this we needed 8 multiplications and 7 additions.

    But thanks to the symmetry of the sinusoid we can take a shortcut:

    First we add/subtract the components of the signal whose corresponding basis values have the same absolute value:

    0.0 + 4.0 = 4 -> * 0.0 = 0
    1.0 + 3.0 - 5.0 - 7.0 = -8 -> * 0.3535 = -2.828
    2.0 - 6.0 = -4 -> * 0.5 = -2

    Result: -4.828

    Now we had 8 additions but only 3 multiplications!

    As this is encouraging, we are motivated to have a look at the other, higher frequencies.
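
    Here is a tiny C sketch (mine, just a check of the arithmetic above) that computes the frequency-1 content of the sawtooth both ways: the straight scalar product with 8 multiplications and the fold-by-symmetry shortcut with only 3.

    #include <stdio.h>

    int main(void)
    {
        /* One period of the sawtooth and the normalized frequency-1 basis vector. */
        double x[8]    = {0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0};
        double base[8] = {0.0, 0.3535, 0.5, 0.3535, 0.0, -0.3535, -0.5, -0.3535};

        /* Straightforward scalar product: 8 multiplications. */
        double direct = 0.0;
        for (int i = 0; i < 8; i++)
            direct += x[i] * base[i];

        /* Symmetry shortcut: first fold together the samples whose basis values
           share the same absolute value, then multiply once per distinct value. */
        double shortcut = (x[0] + x[4]) * 0.0
                        + (x[1] + x[3] - x[5] - x[7]) * 0.3535
                        + (x[2] - x[6]) * 0.5;

        printf("direct   = %.4f\n", direct);     /* -4.828 */
        printf("shortcut = %.4f\n", shortcut);   /* -4.828 */
        return 0;
    }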
  • ErNa Posts: 1,742
    If we look at the first harmonic above the fundamental, frequency 2, we now have 4 samples per period.
    The vector is (0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0), in normalized form (0.0, 0.5, 0.0, -0.5, 0.0, 0.5, 0.0, -0.5).

    First we check whether this vector, representing frequency 2, is linearly independent of the frequency-1 vector by calculating the scalar product:

    (0.0, 0.5, 0.0, -0.5, 0.0, 0.5, 0.0, -0.5) * (0.0, 0.3535, 0.5, 0.3535, 0.0, -0.3535, -0.5, -0.3535) = 0.0 + 0.17675 + 0.0 - 0.17675 + 0.0 - 0.17675 + 0.0 + 0.17675 = 0

    Yes, it is: the scalar product is zero, so the two vectors are orthogonal.
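
    And a last small C sketch (my own check) confirming that the two normalized basis vectors are indeed orthogonal: their scalar product comes out as zero.

    #include <stdio.h>

    int main(void)
    {
        /* Normalized basis vectors for frequency 1 and frequency 2 over 8 samples. */
        double f1[8] = {0.0, 0.3535, 0.5, 0.3535, 0.0, -0.3535, -0.5, -0.3535};
        double f2[8] = {0.0, 0.5, 0.0, -0.5, 0.0, 0.5, 0.0, -0.5};

        double s = 0.0;
        for (int i = 0; i < 8; i++)
            s += f1[i] * f2[i];

        printf("scalar product = %g\n", s);   /* 0: the basis vectors are orthogonal */
        return 0;
    }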