View Full Version : Fourier for dummies - under construction

11-20-2010, 05:00 PM
Bean gave us CODRIC for dummies


and discussion arose of the need for Fourier for dummies.

Rather than punish Bean for his good work, it was suggested (by me) that the rest of us work together to create something using Bean as inspiration.

Heater can't see this working (blind leading the blind and all), so now it is a challenge, and the result will be decided by whether he or I or anybody else can claim to have gained an understanding of Fourier when work stops.

If somebody that already has a clear understanding of Fourier
could point out when we've gone wrong, that would be appreciated.

Otherwise, any interested party can post/contribute.

Bean's format looks like this to me:

a) one sentence high level explanation

b) first basic concept

c) example of first basic concept

d) next concept

e) example combining concepts

f) mostly integers

g) long strings of float only used when we are showing how we get rid of them:

"So what angle would give us Tan(Angle) = 0.500.
It turns out to be 26.56505118.
Now this isn’t exactly half of 45, but it doesn’t really matter.
As long as it is smaller than 45 and larger than or equal to 22.5 it will work."

I believe this last is the "magic".

So let's get started. There are a couple different places to start:

The whole idea is that a periodic function might be broken down into simpler components. Is this correct?
For example, a violin note might be broken into constituent triangle and sawtooth waves of different frequencies; or a voice might be broken into constituent waveforms whose composition changes over time.

Edit: The goal - to understand this Fourier thing and how to use it on the prop.
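To make "broken down into simpler components" concrete, here is a minimal Python sketch (Python used purely as runnable pseudo-code; the function name and the number of terms are my own illustration) that builds a square wave out of nothing but plain sine waves:

```python
import math

def square_wave_partial_sum(t, n_terms):
    """Sum the first n_terms odd sine harmonics of a square wave:
    sign(sin(t)) ~ (4/pi) * (sin(t)/1 + sin(3t)/3 + sin(5t)/5 + ...)."""
    total = 0.0
    for k in range(n_terms):
        n = 2 * k + 1  # only odd harmonics appear in a square wave
        total += math.sin(n * t) / n
    return 4.0 / math.pi * total

# With enough harmonics, the sum of smooth sine waves looks square.
print(round(square_wave_partial_sum(math.pi / 2, 200), 2))  # close to 1.0
```

The more harmonics you add, the closer the sum hugs the ideal square wave; decomposing a violin note is the same idea run in reverse.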

11-20-2010, 05:43 PM
I make the observation that Bean pulled off his CORDIC explanation without the use of matrix notation or other heavyweight mathematical notation that you normally see in documents claiming to introduce the CORDIC algorithm(s). For example, see the wiki entry on CORDIC: http://en.wikipedia.org/wiki/CORDIC

As I said on the CORDIC for dummies thread, this confirms my belief that there are some beautiful, simple, elegant ideas in mathematics that almost anyone could understand if they could be explained without all the usual mathematical baggage. That is to say, the essence of the idea does not need all that confusing and mysterious symbology (is there such a word?).

So with Fourier analysis/transform and the DFT, top marks to anyone who can explain the essential concepts, the core of the idea, without:

1) Complex numbers.
2) Integral signs.

I make another observation:

When you perform the Young's slits experiment, as many here will have done at school at least, the pattern of light observed on the screen is the Fourier transform of the geometry of the slits that the light travelled through. Perhaps they did not tell you that at school; they didn't in my case.

Anyway, in that experiment the result appears on the screen; the light "does the trick" without any "imaginary" things going on.

Similarly, when I calculate a Fourier transform on my computer, there are certainly no imaginary numbers in use. An x86 CPU, for example, does not have such a type.

My philosophical conclusion is that it is possible to make an exposition of the Fourier transform and the discrete FFT without the use of imaginary numbers. And therefore the essence of the idea should be understandable by the mathematically unsophisticated.

Get rid of the integrals and exponentials and we are onto a winner.

11-20-2010, 05:59 PM
By the way, is the Propeller Chip forum the correct place for such a general thread?

11-20-2010, 06:58 PM

I enjoyed the Cordic material very much. I am one of those mathematically unsophisticated people.

The higher level concepts are easy to visualize for me, but not so easy to abstract into the higher level math forms. Early on in my education, I made the observation that computers do only a few things:

Add, copy numbers around, and operate on them logically. That's it!

Bean's explanation is powerful because it hits home at that level, and it's very enlightening, and relevant to the Prop because of where it is at math wise.

There are lots of basic math shortcuts, all leveraging numerical complements, modulo, and other core things seen in computers regularly, but not in ordinary math as I see it taught.

Getting to that level with more complex ideas is very useful, not just to programmer types, but to ordinary people too. I would vote to encourage this kind of thing here, because of that.

Recently, I was having a math discussion with a Stanford grad working with us. This guy knows his higher math, and often teaches me things I missed out on, largely because I got side tracked on manufacturing and computers at that time in my life. He went into materials science and the physics of that as related to people and the world they live and play in. (they engineer things like sports wear, boat hulls that move through water more efficiently, wake and snow boards, etc...)

Just the other day, both of us were talking about kids and how to show them math concepts. Simple things like -5x+x-6x, for example. What's going on here? To me, it's just a few adds: (-5x) + (+x) + (-6x). We were discussing "combine terms", and just what "combine" means, and it means ADD, just by way of one example to highlight what I'm talking about. The same is true for something like 3-5=-2. It's just (+3)+(-5), and when only adding, young people get the sign rules right away, whereas if subtraction is also involved, they have to remember two sets of rules and get those right! (worked wonders with my kids)

Anyway, we've been working with some powerful simulation software that I regularly use, support and demonstrate to people. I don't get all the math behind that software, and he does. It's been very interesting to break the core things down to what the computers really do. That is very, very different from the higher order math used to derive what to do in the first place. We met because I've modeled many of the things he does over the years, which I do well, and he's crunched the numbers by hand, vetting simulations on computer.

We ended up with "math purists" and "math geeks", vs, ordinary people and practical, applied math discussion, and the difference between the purists and the engineers. The purists build tools, which the engineers then apply to get stuff done. Very different things, both necessary things. Ordinary people work more like the engineers do, when they have the tools expressed in ways they can grapple with.

We have a solid population of ordinary people here, which speaks to the relevance.

IMHO, this kind of thing is just as important as the higher order constructs are, so that we can merge theory, our understanding of the physics, and realize applications and engineering and computation on that to our mutual benefit.

Sorry for being wordy, the core message here is "please do", and I just thought a moment talking about why made sense. Scroll me, if you need to.


Edit: I have had an idea for breaking down complex sounds into components floating around in my head for years. One day I would like to code it up, apply some pattern-matching ideas, and explore how we break down complex, combined sounds, and a better understanding of Fourier would help. Deffo interested. Props might not compute fast enough for it to be useful in real time, or on large datasets, but they would be a good prototyping place to begin.

11-20-2010, 08:48 PM
Hi Heater.

I say the same as potatohead ---- IMHO, YES! --> But maybe add some extra comments.

If possible, give some example code that can run in Spin for beginners.

Ps. That can give beginners good info on how to start mathematical calculations in Spin on the Propeller

By the way, is the Propeller Chip forum the correct place for such a general thread?

11-21-2010, 07:11 AM
Here is a guy who has done a lot of work on this already: http://www.katjaas.nl/home/home.html

OK he goes head first into the complex plane and I have no idea how understandable it is yet but it all looks very beautiful.

11-21-2010, 02:29 PM
This is a very interesting challenge, because there is a basic operation in the FFT, the "butterfly", and it is not clear to me why this operation, when repeated, shifted, ... creates a frequency spectrum from a spatial or time function.

OK, anyway, there is a discussion about whether Fourier needs matrices, or whether it is better to use complex numbers. I do not like complex numbers very much, because there is the "imaginary" part, which is somehow frightening. I prefer the vector approach.

With an infinite number of dimensions, of course.

11-21-2010, 03:05 PM
Here is a guy who has done a lot of work on this already: http://www.katjaas.nl/home/home.html

OK he goes head first into the complex plane and I have no idea how understandable it is yet but it all looks very beautiful.

Whoa! THERE'S a guy who cares!

11-21-2010, 03:09 PM
OK, anyway, there is a discussion about whether Fourier needs matrices, or whether it is better to use complex numbers.

I do not like complex numbers very much, because there is the "imaginary" part, which is somehow frightening.

This is good information.

If it is easier to do without imaginary parts, then the choice is easy.

However, I like the imaginary route, BECAUSE it is scary.

11-21-2010, 03:48 PM
To me the key to the Fourier transform is vector algebra. We know: if there are two vectors and their scalar product is zero, then these vectors are orthogonal. If the scalar product of a vector with itself is 1, then this vector is said to be normalized; its length is one. The fascinating thing is: we all have problems imagining vectors with more than three dimensions in space, but why do we care? We have an array of, let's say, 40 elements, and a second one; we multiply component by component and sum up. If the result is zero, these two arrays can be seen as representing 40-dimensional vectors which are perpendicular. That is just a convention, and the question is: is there an application?

Now we can easily verify, just by sketching a sine and cosine of the same frequency, that they can be seen as orthogonal vectors. When you multiply sin*cos working in from the left and from the right, one of the products will be positive and the other negative, but they have the same magnitude, so their sum is zero; and this is true for all pairs of values from the left and the right to the middle. q.e.d.
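The argument above can be checked numerically. A small Python sketch (my own illustration, using the 40 samples mentioned in the post):

```python
import math

N = 40  # equally spaced samples over one full period, as in the post
sin_v = [math.sin(2 * math.pi * k / N) for k in range(N)]
cos_v = [math.cos(2 * math.pi * k / N) for k in range(N)]

# The scalar product: multiply component by component and sum up.
dot = sum(s * c for s, c in zip(sin_v, cos_v))
print(abs(dot) < 1e-9)  # the two "40-dimensional vectors" are orthogonal
```

The same loop with sin_v against itself gives N/2 rather than zero, which is exactly what makes sine a usable measuring stick in the transform.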

11-21-2010, 05:47 PM
Here is a guy who has done a lot of work on this already: http://www.katjaas.nl/home/home.html

OK he goes head first into the complex plane and I have no idea how understandable it is yet but it all looks very beautiful.

That site is beautiful. I've read through some of his "beginner" work to help people understand DSP related concepts. It's excellent! I have a lot of trouble at that level, and found his explanations of matrices and complex numbers as rotations understandable. Love the style. Personification is always fun, and I'm a fan of it personally.

Thanks for linking that diversion.

One rather profound bit of insight gleaned from his pages is curvature = frequency. There is a photo where he details a decaying resonator, with scope plots, and it's very clear that the radius on the X-Y plot = amplitude and the curvature = frequency, with rotation being time. I never understood that, or what "negative frequencies" were, until spending more time on that site last night than I care to admit.

11-22-2010, 12:21 AM
FFTs (and DSP in general) are a fascinating topic.

If you're interested in a simple, clear explanation of FFT that focuses on concepts and not math, I highly recommend Steven W. Smith's book (available online for free):

Chapter 12 (http://www.dspguide.com/ch12/2.htm) covers FFT.

Also, I have cataloged all the prop-based FFT solutions that I'm aware of in this thread (http://forums.parallax.com/showthread.php?t=118111).

11-22-2010, 06:20 AM
One remark about complex numbers:

Just this: there is no such thing as a complex number. If you know this, there is no need to know what "complex number" means.

But if you are thinking about the Fourier transform, you should be aware that you are always working with pairs of numbers.

Let us use different words and you may understand what I mean: all men are created equal, but there are men and women. Complexity comes from being married; therefore: complex number.

"Normal" numbers can be seen as singles, but they are just pairs of numbers where the second half is 0, always zero, and therefore is not written down, just to save time.

If you have a time-dependent signal, that is, just an array of numbers, let's say 512 of them, then you must input into the Fourier transform a second array where all elements are equal to 0. That does not mean that they do not exist. So the input consists of 1024 values.

The output again is 1024 values. But you cannot get something from nothing. So how can you make 1024 output values from 512 different input values?

The point is: you do not get 1024 independent values, but 512 * 2 values; every value comes twice, because the spectrum is symmetrical.

What you have to understand in doing the FT is not the algorithm; this is just mechanics. It is the meaning, the "philosophical" background.

Someone said: the greatest miracle is not that there is math, but that math is related to real things. An FT allows you to see whether there is something happening repeatedly. As I showed above, the scalar product of sin and cos over the range 0 - 2Pi is zero, because of the symmetry. The same is true for the scalar product <sinusoid|constant>. If you multiply a sinusoid by a constant and add together all the values (equidistant, no matter how many), you will always get a result of 0, independent of the frequency of the sinusoid, as long as the range is a multiple of 2Pi. That is, if you have a fundamental frequency in a range of 2Pi and any number of harmonics, then the scalar product between these and a constant will always be 0: a DC voltage has no frequency spectrum. Or, to be consistent, we say: the frequency is ZERO ;-)
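The "every value comes twice" claim is easy to demonstrate. Here is a deliberately naive Python sketch (a correlation DFT, not an FFT; the 16-sample size and the 3-cycle test signal are my own choices for illustration):

```python
import math

N = 16
signal = [math.sin(2 * math.pi * 3 * k / N) for k in range(N)]  # 3 cycles

# Correlate against a sine and a cosine at every bin: each output is a PAIR.
spectrum = []
for f in range(N):
    re = sum(signal[k] * math.cos(2 * math.pi * f * k / N) for k in range(N))
    im = sum(signal[k] * math.sin(2 * math.pi * f * k / N) for k in range(N))
    spectrum.append((re, im))

# For a purely real input the spectrum is symmetrical: bin f mirrors bin N-f,
# so the 2*N output numbers carry only N independent values.
mag = [math.hypot(re, im) for re, im in spectrum]
print([round(m, 3) for m in mag])
```

Bin 3 and bin N-3 carry the same magnitude, and bin 0 (the DC bin) comes out zero for a pure sinusoid, matching the <sinusoid|constant> = 0 argument above.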

Beau Schwabe
11-27-2010, 05:09 AM
If you can follow it, here is a good You-Tube video reference to FFT ...

Part1 http://www.youtube.com/watch?v=ObklYbQaX24
Part2 http://www.youtube.com/watch?v=QO3kgwYzpZg&NR=1
Part3 http://www.youtube.com/watch?v=6-llh6WJo1U&NR=1

11-27-2010, 09:07 AM
I did not follow the video to the end, because already after a short time the first one presents this equation
(voice: another way to represent the signal is by the sum)

f(t) = SUM( f(tn) * delta(t - tn) )

and already here I see an error; it should be

f(t) = SUM( f(t) * delta(t - tn) )

because it is the delta function which cuts the value f(tn) out of the function f(t).

And from there it is a short step from the sum to the integral form, which in reality is a complicated mathematical transition.

So to me it seemed not useful to watch the whole presentation.

By the way, "for dummies" and political correctness: if I explicitly present something to "dummies", I want to show that I have found a way to let others take part in something fascinating. If my intention were to keep them in the dummy state, I would just give a lecture on a very high, foggy level; everybody would be impressed and no one would have learned anything.
I only read books for dummies. But I have problems writing them ;-(

11-27-2010, 04:10 PM
This thread is now one week old. It has attracted several participants, and many watchers.

Some have no idea about Fourier, what it's used for, or how it works.
Some have a working knowledge of how to use it, but don't understand it.
Some have a very detailed understanding, but can't explain it very well to the target audience.

I suggest that Fourier for Dummies is not the simplest topic this forum has addressed, but we are making excellent progress. Already we see an issue between the references (thanks, Beau!) and actual users that understand it (thanks, ErNa!). Such issues would totally derail a new person, so it is essential to identify and resolve them.

Fourier is four concepts, I guess, based on what I found in the wiki:

1) Fourier the dead guy - who was known for studying heat transfer and vibrations

2) Fourier series - decomposing a complex periodic function into its simple component functions.

In audio this is something like breaking a violin note into a sawtooth wave at the base frequency and a bunch of sine waves at higher multiples of that frequency. The cool part is that the mixture of the higher frequencies (harmonics) makes a noise become a violin.

3) Fourier transform - the method to decompose a signal into its component frequencies

4) Fourier analysis - breaking the complex signal into its components in terms of sums of simple trigonometric functions

- - - - - -

So, that is a first stab at the first step of Fourier for dummies. Could someone correct or simplify the above? How do we change it to fit Bean's format for "for dummies"?

- - - - - -

a) one sentence high level explanation

b) first basic concept
c) example of first basic concept
d) next concept
e) example combining concepts
f) mostly integers
g) long strings of float only used when we are showing how we get rid of them

11-27-2010, 05:34 PM
The transform is hard to explain, because there is a basic understanding at the start, and this hinders grasping the concept.

The key hurdle is complex numbers and the imaginary part. Introducing complex numbers eases calculation, but complicates understanding.

How do we get around complex numbers?

We have to know that whenever we add a sinusoidal function to another sinusoidal function of the same frequency, the result is a sinusoid. And: a co-sinusoidal function is a sinusoidal function shifted along the X-axis by 1/4 of the period, independent of what X designates: time, space, temperature, wellness. 1/4 of the period is synonymous with 90° or pi/2. Nothing else.

Now, as we add two sinusoids, the resulting function is a sinusoid, but (in general) with a different amplitude and a different phase, depending on the phase shift and amplitude of the original functions.

Further we have to know: if there is a given sinusoid with some amplitude and phase shift, that is, its rising slope crosses the x-axis (y = 0) not at x = 0 or n*2Pi, then we can always find a sinusoid with phase shift 0 and a sinusoid with phase shift 90° (which we call a co-sinusoid for simplicity), with suitable amplitudes, that in sum form exactly this first shifted sinusoid.

So that is the reason why we introduce pairs of numbers to express a shifted sinusoid: we decompose it into two sinusoids with special properties: the first is not shifted in phase, the second is shifted by 90°. From this we get some advantage in calculation, but not in understanding. Understanding gets more complicated.

Now, why complex numbers? At first we only have pairs of numbers: the amplitudes of the two components, sinusoid and co-sinusoid. But, as we know that the x- and y-coordinates of every point of a circle can be determined by the cosine and sine of a given angle, with leg length equal to the circle's radius, we can inversely express every point of this circle by an x- and a y-value. We now omit the labels and replace x by nothing and y by i.

So, what we do is: we have an original function of sinusoidal form with a given phase shift, and we express this function by a sinusoid of a given amplitude (with phase shift 0) and a co-sinusoid of a given amplitude (also with phase shift 0, but this is a sinusoid with phase shift 90°). And just because there is some advantage in calculation, we write the two numbers not as a vector (x, y) (that is the modern interpretation of having a pair of numbers), but as two numbers separated by a + sign, and to discriminate y from x we attach a signature "i" to the y-value. That's it, and there is nothing "imaginary" about it.
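The decomposition described above is just the angle-addition identity sin(x + p) = sin(x)cos(p) + cos(x)sin(p). A tiny Python check (the amplitude and phase values are arbitrary illustrations):

```python
import math

A, phase = 2.5, 0.7  # arbitrary amplitude and phase shift (radians)

# The shifted sinusoid A*sin(x + phase) decomposes into an unshifted
# sine/cosine pair, via sin(x + p) = sin(x)cos(p) + cos(x)sin(p):
a = A * math.cos(phase)  # amplitude of the sinusoid component
b = A * math.sin(phase)  # amplitude of the co-sinusoid component

for x in [0.0, 0.5, 1.3, 2.9]:
    lhs = A * math.sin(x + phase)
    rhs = a * math.sin(x) + b * math.cos(x)
    assert abs(lhs - rhs) < 1e-12
print("the pair (a, b) stands in for (amplitude, phase):", (round(a, 4), round(b, 4)))
```

The pair (a, b) is exactly the "pair of numbers" of the post: write it as a + bi and you have a complex number, write it as (a, b) and you have a 2-dimensional vector.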

11-27-2010, 05:57 PM

Below is the text of a post I wrote yesterday and was a bit nervous about actually posting, because I can see it's going to require some serious effort and I'm not sure I have the writing and organizational skill to pull it off. But given your last post, here goes...

Here are my criteria for "Fourier For Dummies":

0) The reader has a basic idea of signals, samples, analog-to-digital conversion, some programming, and typical basic electronics/microcontroller stuff.
1) The reader has heard of this magical Fourier Transform that can tell you what frequencies are in a signal, but cannot begin to make headway on understanding the usual explanations and accompanying maths. Similarly, any code he has seen that does a Fourier Transform is baffling.
2) The reader has a grasp of basic high school maths, sin, cos, pythagoras, averages etc.
3) The reader has little or no idea about higher maths, integrals, series summation, exponentials, complex numbers...
4) Fourier For Dummies will not use integrals, exponentials or complex numbers.
5) We may never get to the Fast Fourier Transform unless someone comes up with a simple way to continue the story.

Yesterday's thoughts...

Over on the CORDIC for dummies thread prof_braino came up with the idea of Fourier For Dummies (The FFD:)) and made this comment:

Any takers? I nominate Heater, he says he knows the Fourier part.

Well, thanks a lot Braino. With that single statement you have crashed my brain for a week:) Pondering how to go about creating such an essay, to the exclusion of everything else I'm supposed to be thinking about from time to time.

Well, the result is I think I have dreamt up a way to introduce the Fourier transform for sampled signals in such a way that the mathematically naive can get to an understanding of the core concept without any integrals, sigmas, exponentials, imaginary numbers or even pi. A high school understanding of cos and sin will do.

I'm not sure about what to do for example code. Doing this in Spin will be messy as Spin has no floating point. Perhaps C would do or just pseudo code. Or perhaps something like Python makes a good pseudo code for this.

If no one else has started on such a document yet I will make a stab at it. Probably dribbling out in small parts with an invitation for comments.

Beau Schwabe
11-27-2010, 06:50 PM
That's unfortunate; I thought the videos, especially the last one, brought things together. They're more of a reference anyway. :-(

I'm not claiming a vast understanding of FFT but I believe I understand the concept.

We live in the time domain... everything we do we understand in reference to time. We eat, sleep, etc. at regular intervals that are periodic in nature. If you were to 'stack' all of the periodic events that happen throughout a day, it would become a mess, and difficult to see all of the superimposed waveforms. So we need another way to quantify this data. One answer is the FFT. The FFT represents the frequency domain of the same data, which means that a range of frequencies is represented from LEFT to RIGHT rather than a time interval. Since in the FFT the period can be from +infinity to -infinity, you typically set a window, or a sweep of frequencies, and apply each period that represents the swept frequency to the entire data sample. These are the values that you SUM together, and they are represented vertically, while the next frequency in the sweep is applied to the data all over again and incremented horizontally. How you divide up the period of the particular frequency that you are sweeping is up to you, but be aware that the sample frequency (the rate at which the original data was sampled) will have a beat frequency against the rate at which the period was divided, which will most likely show up as a component in the FFT output.
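The sweep-and-SUM recipe above can be sketched in a few lines of Python (my own toy illustration: a 1 kHz tone sampled at an assumed 8 kHz rate, swept against three candidate frequencies; a real FFT does this far more efficiently):

```python
import math

def sweep_spectrum(samples, freqs_hz, sample_rate):
    """For each swept frequency, apply that frequency's sine and cosine to
    the entire data sample, SUM the products, and report the magnitude."""
    out = []
    n = len(samples)
    for f in freqs_hz:
        re = sum(samples[k] * math.cos(2 * math.pi * f * k / sample_rate)
                 for k in range(n))
        im = sum(samples[k] * math.sin(2 * math.pi * f * k / sample_rate)
                 for k in range(n))
        out.append(math.hypot(re, im))
    return out

rate = 8000                                                           # assumed Hz
data = [math.sin(2 * math.pi * 1000 * k / rate) for k in range(800)]  # 1 kHz tone
spec = sweep_spectrum(data, [500, 1000, 1500], rate)
print(spec)  # the 1000 Hz slot dominates; the others sum to almost nothing
```

Each swept frequency yields one vertical value; stepping the sweep gives the left-to-right frequency axis described above.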

11-27-2010, 07:40 PM
To gain a deeper understanding of fourier transform the first step is: forget what you know. Start from the very beginning.

Fourier transform has nothing to do with sinusoids. There is no time domain or frequency domain. Nothing.

Now define an EVENT. Something happens. To have an event, there is a precondition: the event is not. Then it is. Then it is not again. For something to happen, we need the concept of time. I am not willing to say what time is ;-). But I can tell you a property of time: time allows things to change.

If we use the word "then" we automatically think in terms of time. But that is not necessarily the case.

Let me make an example:
you are walking along a road and fall into a pit. You can describe this from the view of time: there is no pit, there is a pit, there is no pit. But also from the point of view of space: the road is OK, there is a pit, the road is OK.

Have mercy: I do not find another example.

The same scenario, but a row of people is going side by side down the road, and only one falls into the pit: this single pit can be seen in space and time. In space: only one of the men "finds" the pit. In time: it doesn't happen always, only at a given moment.

Now we have a single "event" arranged in space and time.

If now the row is walking down the road and a man falls into a pit more than once, we say: the event is repeated in time. If in one moment more than one man falls into a pit, we have an event repeated in space. And we can have this in a regular or stochastic way.

The fourier transform gives us numbers, which describe the existence of the pits in time and space.

Now, let us have a linear row of pits crossing the street. A row of men walking down the street will see an event in space, because they all reach the pits at one moment, and a number of them fall into the pits.
The same row of men crossing the street will find an event in time, because only one will fall into the pits; all the others will go left and right.

So, in this simple example I wanted to introduce the concepts of space and time, and we also see that space and time may be swapped, as the environment is detected by experience.

So often people discuss the 4-dimensional space-time. This is, to me, misleading. We exist in time and space: time allows us to change our state (for example, we are able to learn), and space allows us to be not alone, because two objects cannot exist in the same place, and if there were no space, then only one object would be possible.
Why does it make sense to have only one dimension in time? You cannot change a state in two ways and still be one; see Jekyll & Hyde. If time had more than one dimension, you would just deliquesce. But if space had only one dimension, you just could not pass along the street. Even two dimensions are a little insufficient, because you have no chance to gain potential energy. The third dimension solves this problem, and some others too: you can build a town and lie in your bed alone.

This contribution doesn't use imaginary numbers, but some imagination; so it is persistent in time and space, and can be read in time and space.

By the way, there is a story: a famous old conductor of the Cologne Gürzenich orchestra loved to introduce modern music to the audience. One day there was a premiere, and only boos. So he turned to the audience and said: "As I see, you didn't understand the music. No problem, we will do the performance again!" And: da capo!

11-27-2010, 09:24 PM
You guys are excellent!

@heater - thank you very much. I did not intend to saddle you with writing the entire document, although the rest of us surely benefit if it works out that way. I'm hoping your "first stab" is a step towards a community effort; you of course have other fun projects to tend to.

@Beau and ErNa - this is really good: two perspectives, one from the math, one from the application of the concepts (I think) :) . In any case, they need not be viewed as contradictory; the task is to determine how both fit into the discussion to benefit the explorer. This is the really tough part, determining what will make the light go on in the reader's mind.

At the moment I can't understand either set of explanations, nor most of the explanations on the web. I'm sure many of the rest of the folks watching feel the same way. If folks could respond noting when "the light switches on", that will be of extraordinary benefit.

This is a really cool experiment!

11-28-2010, 12:45 PM
I will give an example for a Fourier transform.

Imagine you have a delta distribution, often called a delta function. Why the two words? In a mathematically exact sense there is a difference, because a function is somehow well defined, while a distribution just has some properties.

Imagine you have a Gaussian function. This function starts from -infinity, but the y-value is very, very, very small (indeed, no function decays faster than the exponential e^-x, and the Gaussian is this exponential with a squared argument). It reaches a value of 1 at e^0 and decays towards the + and - sides; the points of inflection are reached at +- the standard deviation.

This function is well defined, and also well known. So, if we modify the equation to have a variable standard deviation, and let the standard deviation go to zero, then the function becomes ever higher and narrower, in such a way that the area under the graph stays constant. In the end, at zero width, the graph reaches infinity as a value. But this is not a valid situation in (normal) math.

Now the distribution comes onto the scene. The distribution doesn't have a discrete shape, but a property. The property is: the y-value of the distribution is zero, except at the place A. At this place the value is not defined, but somehow infinite. Whenever the integral of this distribution is calculated, starting from -infinity, the value stays 0 until the x-value A is crossed. At that very moment the integral rises to 1 and stays at 1, even when integrating up to +infinity. That is what is called a property:
whenever you come across A, the integral becomes 1, no matter where before A you start and where after A you stop, even if this distance is infinitely small.

Now imagine: if you multiply a given function with this distribution, you run into a problem! How do you do this?
The solution is: we talked about the Gaussian, whose integral doesn't change, independent of its width. So if you do the limit transition to a width of zero, the Gaussian shows the same property that a delta distribution has (and a delta distribution is what I described above). So we substitute for the distribution a Gaussian with standard deviation -> 0.

Now we multiply a function with the Gaussian and integrate, and what we get as a result is just the y-value of the function at the place A. The Gaussian is just a cutting knife, sometimes called a sampler. The physical equivalent is the sample-and-hold in front of an ADC.

I think this is another building block for understanding the Fourier transform.
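The "cutting knife" can be tried numerically: multiply a function by an ever narrower Gaussian and integrate, and the result closes in on the function's value at the knife position. A Python sketch (the integration range, step count, and the test point 0.8 are my own choices):

```python
import math

def gauss(x, mu, sigma):
    # Normalized Gaussian: the area under the curve is 1 for any width.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def knife(f, a, sigma, lo=-10.0, hi=10.0, steps=40000):
    # Riemann-sum approximation of the integral of f(x) * gauss(x; a, sigma).
    dx = (hi - lo) / steps
    return sum(f(lo + i * dx) * gauss(lo + i * dx, a, sigma) * dx
               for i in range(steps))

for sigma in (1.0, 0.1, 0.01):
    print(sigma, round(knife(math.cos, 0.8, sigma), 4))
# As sigma -> 0 the result approaches cos(0.8): the knife cuts out f(A).
```

With the wide Gaussian the reading is a blurred average; as the width shrinks, the integral converges to the single sample f(A), just like a sample-and-hold.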

11-29-2010, 05:32 AM
Now with such a "cutting knife" available, we can do this experiment:

Let us say we place the knife at x = 0, and we take as the function to inspect a sinusoid. As sin(0) = 0, the knife will always return 0, independent of the sinusoid's frequency. If we take a co-sinusoid, the result will be 1.

If we now move the knife slowly to positive x values, we will see that the first output grows while the second decreases. But as we may know (google for "as we may think"), the sum of the squares of sin(x) and cos(x) is a constant, and so is its square root. We call both numbers, which are related, by a new word. Two words are common: complex number, or 2-dimensional vector.

Again I point to this: as we do this, we "imagine". Thinking, doing math, etc. always creates only a picture or model of reality. It never is real.
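A quick Python illustration of that trade-off (the knife positions and the amplitude are arbitrary): as the knife slides, the sine reading grows while the cosine reading shrinks, but the square root of the sum of squares keeps returning the amplitude:

```python
import math

A = 3.0  # a hypothetical signal amplitude

# Slide the knife to a few x positions and read both channels:
for x in [0.0, 0.3, 0.7, 1.2]:
    s = A * math.sin(x)  # grows as the knife moves away from x = 0
    c = A * math.cos(x)  # shrinks correspondingly
    print(round(s, 3), round(c, 3), "->", round(math.sqrt(s * s + c * c), 3))
# The last column is always 3.0: the pair's length recovers the amplitude.
```

This is why the pair is worth naming: neither reading alone is the amplitude, but the pair, taken together, always is.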

11-29-2010, 07:54 AM
Now we find a solution for the equation 1 = 1

Seems to be a silly doing, but isn't. It is pure science ;-)

Let there be an interval on the x-axis, from start to end. And let us separate this interval into a number of chunks, lets say 1024. Or whatever.
Now we create us 1024 delta distributions that way, that they cover the interval. The first is
1, 0, 0, ... the second 0, 1, 0, ... , the last 0, ... , 0, 1

Let there be an array of 1024 numbers, representing the graph of a function.

If we now multiply the first delta distribution with the function, chunk for chunk, and add the products (this is called a multiply-accumulate or MAC operation), we get as a result the first value of the function.
If we multiply with the second delta distribution, the result is the second value of the function, and so on.

In doing this we create 1 number from 2048 numbers!
Is there any question why 2048 and not just 1024?
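The MAC picture above can be sketched directly (Python for illustration; the ramp signal is an arbitrary example):

```python
# 1024 samples of some function, here a simple ramp
N = 1024
signal = [i * 0.5 for i in range(N)]

def delta(k, n=N):
    # k-th discrete delta distribution: 1 at index k, 0 everywhere else
    return [1 if i == k else 0 for i in range(n)]

def mac(a, b):
    # multiply-accumulate: sum of the element-wise products
    return sum(x * y for x, y in zip(a, b))

# The MAC of the k-th delta with the signal picks out the k-th sample
# (1024 delta values plus 1024 signal values in, one number out):
print(mac(delta(0), signal))   # first value of the function, 0.0
print(mac(delta(5), signal))   # sixth value of the function, 2.5
```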

11-29-2010, 01:02 PM
I have heard your cries...
I am working on a Fourier for dummies tutorial.

Give me a couple days and I'll post something to get us started.


11-29-2010, 03:37 PM
Take a look at this starting tutorial and let me know if it makes sense.

Please comment, together we can make this a great tutorial.


Beau Schwabe
11-29-2010, 04:18 PM

This is nice. This is very similar to the approach that I would take.

It should probably be pointed out that there are many distinct FFT algorithms involving a wide range of mathematics. The type used in the example here is the DFT (discrete Fourier transform), which decomposes a sequence of values into components of different frequencies.

The example quickly becomes more complex when there are several frequencies involved. The attached graph below represents 2500 kHz, 3250 kHz, 3750 kHz, and 4500 kHz. The 'Blue' trace represents the data you would be looking at, which is the SUM of the frequencies mentioned above.

11-29-2010, 05:13 PM
It surely is different from the way I try to go, but in the end it makes sense to come to a place from different directions.
My experience is: the moment you understand the math you are satisfied, and later, when some boundary condition becomes relevant, the algorithm doesn't show the expected results and it is difficult to find the reasons.

There are a lot of tools on the net which allow you to do the math, some free, some expensive. I propose downloading SwitcherCAD (LTspice), a SPICE version that is free and easy to use, in my opinion, and allows schematic entry. This SPICE version can do and display an FFT, so someone could create an example and share it.

Dave Hein
11-29-2010, 05:14 PM
It should probably be pointed out that there are many distinct FFT algorithms involving a wide range of mathematics. The type used in the example here is the DFT (discrete Fourier transform), which decomposes a sequence of values into components of different frequencies.

The example quickly becomes more complex when there are several frequencies involved. The attached graph below represents 2500 kHz, 3250 kHz, 3750 kHz, and 4500 kHz. The 'Blue' trace represents the data you would be looking at, which is the SUM of the frequencies mentioned above.

All FFT algorithms are mathematically equivalent to the DFT -- They're just faster ways to implement the DFT. The basis vectors can be either viewed as a complex number pair or a 2-D quadrature pair, but in either case they are sines and cosines of different frequencies.


11-29-2010, 06:06 PM
There is a little difference: an FFT runs only with power-of-two sizes and calculates the complete spectrum, while a DFT is sometimes applied with some advantage, since it allows searching for a single given frequency and the number of samples per period is free.
There also is a way based on prime numbers; I came to know that algorithm when using transputers, but I do not remember what the reason was to use that method.

But I want to make one additional remark:

It is said that a Fourier transform determines the frequency spectrum of a signal. This is not the whole truth.

Imagine you have an interval in time or space and you have a defined signal, let's say exactly one "oscillation" of a sinusoid. You determine the frequency spectrum of this signal to be: amplitude 0 for all harmonics, and 1 for the fundamental.

Now you change the interval a little bit without changing the signal, and you determine the frequency again. Now the fundamental frequency of the spectrum is a little bit different from the original, so the original frequency doesn't exist any longer. But you didn't change the signal.

Dave Hein
11-29-2010, 07:46 PM
I think the prime number algorithm is considered to be an FFT also (Fast Fourier Transform).

The DFT can be used to compute the power spectrum of a signal, but the signal must follow the assumptions of the DFT. The N samples of a data vector are assumed to repeat over and over to +/- infinity.

A single oscillation of a sinusoid in N samples actually represents that sinusoid oscillating from -infinity to +infinity. If the interval is changed slightly it now represents a different signal with discontinuities at the block boundaries. This will have a different spectrum.
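This point can be checked numerically. A small Python sketch (the 16-sample block size is an arbitrary choice): one exact period of a sine in the block puts all the energy into a single bin pair, while the same sine stretched to 1.1 periods per block smears energy across every bin.

```python
import cmath, math

def dft(x):
    # Direct (slow) discrete Fourier transform
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

N = 16
# Exactly one period of a sine in the block: energy lands in bins 1 and N-1 only
exact = [math.sin(2 * math.pi * t / N) for t in range(N)]
print([round(abs(c), 3) for c in dft(exact)])

# Same sine, but the block now covers 1.1 periods: a discontinuity at the
# block boundary, and the energy smears across ALL bins
smeared = [math.sin(2 * math.pi * 1.1 * t / N) for t in range(N)]
print([round(abs(c), 3) for c in dft(smeared)])
```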

11-29-2010, 08:18 PM
This pdf is what finally turned on the light for me and gave me an intuitive understanding of what is taking place.
There are a lot of other good tutorials here.

11-29-2010, 09:53 PM
@Dave: Yes, you are right. And that is exactly, what people often do not realize:

If you select an interval, you select an infinite number of such intervals, one following the other. So the selection of the interval is most critical.

What I've shown was: one period of a base frequency determines the interval, and so all harmonics are fixed. Now it is possible to replace a given signal in the interval by a sum of harmonics (phase-shifted sinusoids) of the base frequency.

But what if we have a signal combining two sinusoids, 1 Hz and 1.3 Hz? We just say: this is a signal adding 1 Hz and 1.3 Hz. But we could never make a simple Fourier-transform spectrum of 2 frequencies.

So, this we should be aware of.

A signal is a signal, and in an interval this signal can be replaced by a sum of sinusoids; in doing so, as sinusoids are defined from - to + infinity, we imply that the signal also is periodic in this range.

In practice this is never the case, and so we generate artifacts whose origin we do not understand; we try to eliminate them with much effort and the systems become more and more complicated.

Anyway: we are able to cancel noise and echo today; there are Ogg and MP3, JPEG etc., and they all live from the FT. If Fourier had known this, he would have been proud.

11-29-2010, 10:10 PM
But what if we have a signal combining two sinusoids, 1 Hz and 1.3 Hz? We just say: this is a signal adding 1 Hz and 1.3 Hz. But we could never make a simple Fourier-transform spectrum of 2 frequencies.

I don't understand this statement? Why would a 13 second sample window not work?


11-30-2010, 03:09 AM
Take a look at this starting tutorial and let me know if it makes sense.

Please comment, together we can make this a great tutorial.


The first paragraph is good. The rest is also good, but there needs to be more foundation of simple concepts between the first paragraph and the following paragraphs. Otherwise the pictures start to look like spaghetti (very scary, orderly spaghetti).

OK, I'm going to make guesses here:

The CORDIC paper started with a really good first sentence. Then it got a little hazy until the guessing game, where we could find something to latch onto. I ended up coming back to the first page to figure it out after thinking about the examples. So I guess most of the first page could have come AFTER the first two guessing games, maybe later.

The Fourier paper has the same structure as the CORDIC paper, but does not have the "guessing game" examples. So there is no familiar-ish concept to align my perspectives. Yes. This is the difference, and maybe the key. At least in my case.

I want to see what Heater is coming up with; he seems to understand this perspective.

Man, there's so much here, and it's so close. It's like ALMOST being fluent in a new language, but being unable to catch the conversation at parties.

@ErNa - your explanation is obviously spot-on if I could only grasp it, but I can't follow it even when I take off the "new student" blinders. Your train of thought is way over my head at this point. AND I know it shouldn't be. There is some key link missing.

Considering how the rest of civilization has been trying this same task on and off for 200 years, I think we're making amazing progress.

11-30-2010, 09:42 AM
A complex waveform may be treated as being composed of a number of sinusoid waveforms. These sinusoids are of various phases, frequencies and amplitudes.
Here we see that it is very difficult to give a precise statement, even when not writing in a foreign language. "Various phases, frequencies, amplitudes" groups these three properties on one level. But there is an important difference:
Frequencies can only be multiples of the fundamental frequency, whereas phase and amplitude may show any value. Unless there is a sampling mechanism; in that case phase can only be shifted in increments. And even then: is the analog amplitude digitized in a byte, word, long, floating point, or just an analog value? And: is there any physical behavior that is not quantized? And does the quantum correspond to the digital representation? OK, we see: a lot of questions, and there is no simple solution.

But can we gain understanding of FT independent of implementation?

@Bean: 13 seconds show 13 periods of 1 Hz, but 16.9 of 1.3 Hz. But 10 seconds would do: 10 periods of 1 Hz and 13 of 1.3 Hz. So, to be a devil's advocate, we take 1 Hz and Pi Hz. Now I think it's harder to come around; if there is a solution, I sponsor a pie next UPEW!
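The 10-second claim can be verified directly. A Python sketch (the 50 samples/s rate is an arbitrary assumption): over 10 seconds the fundamental is 0.1 Hz, so 1 Hz and 1.3 Hz land exactly on bins 10 and 13, and nothing leaks anywhere else.

```python
import cmath, math

fs, T = 50, 10            # 50 samples per second for 10 seconds
N = fs * T                # 500 samples total
x = [math.sin(2 * math.pi * 1.0 * t / fs) + math.sin(2 * math.pi * 1.3 * t / fs)
     for t in range(N)]

def bin_mag(x, k):
    # Magnitude of one DFT bin; bin k corresponds to k/T Hz
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))

# Fundamental is 1/T = 0.1 Hz, so 1 Hz is bin 10 and 1.3 Hz is bin 13:
for k in (9, 10, 11, 12, 13, 14):
    print(f"{k / T:.1f} Hz: {bin_mag(x, k):.1f}")
```

Bins 10 and 13 each come out at N/2 = 250; the neighbouring bins are essentially zero. With 1 Hz and Pi Hz no finite interval ever lines up, which is ErNa's point.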

11-30-2010, 11:47 AM
Hey guys - idea:

The transition point from the simple example to the actual math. Managing this transition is key.

All the material given so far is great, but it's like too much water on dry ground.

Bean's "Guessing game" introduction of the concepts of binary search, and the special case where addition can be used to achieve multiplication, and THEN presenting the math that ties it together into something usable; this is the formula I imagine will lead to success.

Since Fourier is one of the more complicated concepts, maybe it would help to take a step back and try a simpler example as another warm up? (unless somebody now has enough to go on and produces magic).

I'm going to start Scaled Integer for Dummies. That might give us another point of perspective to help with Fourier. I hope we won't have too many plates spinning at once.


11-30-2010, 12:33 PM
@Bean: 13 seconds show 13 periods of 1 Hz, but 16.9 of 1.3 Hz. But 10 seconds would do: 10 periods of 1 Hz and 13 of 1.3 Hz. So, to be a devil's advocate, we take 1 Hz and Pi Hz. Now I think it's harder to come around; if there is a solution, I sponsor a pie next UPEW!

Oh, right. I see. Thanks for the clarification.

@Braino, Thanks for the feedback. The Fourier pdf is really just the broad approach right now. But I'll take your comments into consideration.


Dave Hein
11-30-2010, 01:27 PM
So, to be a devil's advocate, we take 1 Hz and Pi Hz. Now I think it's harder to come around; if there is a solution, I sponsor a pie next UPEW!
Take a look at the PDF in RickB's post. It describes how signals are handled that have discontinuities at the block boundaries. The key is to multiply the signal times a window function. The window function goes to zero at the block boundaries, and it eliminates the discontinuities. The window function tends to smooth out the frequency spectrum, but it makes it possible to handle real-world signals.
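A sketch of the windowing idea (Python; the 5.5-cycles-per-block signal and the Hann window are standard illustrative choices, not from the linked PDF): the raw block has a boundary discontinuity and leaks into every bin, while the windowed block drops sharply away from the signal's frequency.

```python
import cmath, math

def dft_mags(x):
    # Magnitudes of a direct (slow) DFT
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n)]

N = 64
# 5.5 cycles per block: the ends of the block don't line up, so the raw
# spectrum leaks into every bin
x = [math.sin(2 * math.pi * 5.5 * t / N) for t in range(N)]

# Periodic Hann window: (nearly) zero at both block boundaries
hann = [0.5 - 0.5 * math.cos(2 * math.pi * t / N) for t in range(N)]
xw = [a * w for a, w in zip(x, hann)]

raw = dft_mags(x)
win = dft_mags(xw)
# Far from the 5.5-cycle peak the windowed spectrum is far cleaner:
print(f"bin 20: raw {raw[20]:.3f}, windowed {win[20]:.3f}")
```

The window smears the peak itself a little (it now spans a few bins), which is the trade-off Dave mentions.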

I would suggest that the "Fourier for dummies" document should start with a description of the Fourier series. It should then introduce the basics of sampling theory and the Nyquist frequency, which is the highest frequency that can be represented by a sampled signal. The impulse, or delta function, which is a single non-zero point in the sampled domain is actually a sin(t)/t function in the continuous domain. This is a result of the fact that the signal must be band limited below the Nyquist frequency.

11-30-2010, 03:17 PM
As I said: it is not so easy, and there are different ways to reach Rome. The trick with the window is often applied, and after windowing, when much of the signal's information is destroyed, a lot of effort is undertaken to bring the info back.
That is what I called "artifacts" above.

The Fourier concept is not complicated, if only we gain an understanding of what it is used for and what the principles are.

And we should not discuss how to implement it in the first step, because the FFT is so tricky that even after understanding it, the question stays: how does one invent such an algorithm?

OK, just a question: is anybody here, who downloaded LTSpice from http://www.linear.com/designtools/software/?

If not: can anybody explain why they hesitate to do so?
I made a model of the Propeller ADC and could share the circuit. And I could share some files with signals, where everybody could test what the FFT does.

Ed T
11-30-2010, 07:07 PM
Here is my simple attempt (for simplicity practical details are left for later):

1) You can find the amount of a particular frequency in a waveform by multiplying the waveform by a sine wave and a cosine wave at that frequency and looking at the DC component of each result. Those 2 numbers (the DC value of the result of the sine multiply and the DC value of the cosine multiply) fully describe the content of that frequency in your waveform.

You can physically do this with an analog circuit:
a) Generate a sine wave and a cosine wave at your frequency of interest
b) Make 2 analog multipliers and input your waveform and the sine into one and the cosine into the other.
c) Put a DC low pass filter on each output. The combination of the DC values out of each fully describes the content of your waveform at that frequency. (Details farther down for how)

2) Change the frequency of the sine wave and the cosine wave to see the DC values at other frequencies. Better yet, have a Prop slowly change the frequency of your sine and cosine and have it record the 2 values at each frequency. You now know the frequency content of your waveform over a range of frequencies, ie the frequency spectrum of your signal.

3) This process is reversible. That is from the frequency spectrum of your waveform you can recreate your original waveform in time.

***Anything more than the above is details or practical concerns******

4) If I can describe my waveform mathematically as a function of time then I can do the above using math and the frequency spectrum could be a function itself (of frequency). This is the fourier transform.

5) The math for the above uses imaginary numbers and e^jwt as a convenient (not necessarily easy to understand) way to represent the sin and cos components and their relationship.

6) In the digital world the above is done with a Discrete Fourier Transform.

Key Details:
1) How do the 2 numbers fully describe the content at a particular frequency? The content at a particular frequency is a unique sine wave at that frequency. To uniquely describe a sine wave you need the phase and magnitude in addition to the frequency. These can be calc'ed from the 2 numbers we recorded at that frequency: the magnitude will be the sqrt of the sum of the squares of our 2 numbers, and the phase is the angle whose tangent equals the cosine value we recorded divided by the sine value. This is why we need the DC value for both a sine and a cosine multiply: so we can get the magnitude and the phase.

2) What do we mean by DC? Well, technically the DC value is the average value over all time. Forever being a little too long to wait, if our waveform is repetitive we can average over a time that is longer than both the period of our waveform and of our sine wave, or better yet a multiple of both. That will give an answer that is close.

Other Details
3) Others details not answered but needed to implement: How fast do I need to sample in the digital world?, How high and low do I need to go in my frequency sampling? How much should I increment my sine and cosine frequency when taking samples? and more.

- Ed
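Ed's steps 1) and 2) in discrete form (a Python sketch; the sample rate, probe frequency and the hidden signal 2*sin(2*pi*3*t + 0.7) are made-up test values): multiply by a sine and a cosine, average, then recover magnitude and phase exactly as in Key Detail 1.

```python
import math

def probe(samples, freq, fs):
    """Multiply the waveform by a sine and a cosine at the probe frequency
    and take the average of each product (the 'DC value')."""
    n = len(samples)
    s = sum(samples[t] * math.sin(2 * math.pi * freq * t / fs) for t in range(n)) / n
    c = sum(samples[t] * math.cos(2 * math.pi * freq * t / fs) for t in range(n)) / n
    return s, c

fs, n = 100, 1000                 # 10 s of data at 100 samples/s
amp, phase = 2.0, 0.7             # hidden signal: 2*sin(2*pi*3*t + 0.7)
x = [amp * math.sin(2 * math.pi * 3 * t / fs + phase) for t in range(n)]

s, c = probe(x, 3.0, fs)
# Each DC value is half of its component's amplitude, hence the factor 2:
mag = 2 * math.hypot(s, c)        # sqrt of the sum of the squares
ph = math.atan2(c, s)             # angle whose tangent is cosine / sine result
print(f"magnitude {mag:.3f}, phase {ph:.3f}")   # ~2.000 and ~0.700
```

Sweeping `freq` over a range, as in step 2), gives the whole spectrum.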

12-01-2010, 05:43 AM
Here is my simple attempt (for simplicity practical details are left for later):

1) You can find the amount of a particular frequency in a waveform by multiplying the waveform by a sine wave and a cosine wave at that frequency and looking at the DC component of each result. Those 2 numbers (the DC value of the result of the sine multiply and the DC value of the cosine multiply) fully describe the content of that frequency in your waveform.

- Ed
As you said, this is a simple attempt, and I want to point, with the help of a microscope, to a weak point:
there is no information about WHY those 2 numbers are sufficient. And that seems to me basic.

The impulse, or delta function, which is a single non-zero point in the sampled domain is actually a sin(t)/t function in the continuous domain.

This function shows to a certain degree the properties of a delta distribution, but isn't that function. The delta distribution is always positive; sin(t)/t is not. That is a significant difference.

So we have to show what makes the Fourier series unique and what follows from this.
Understanding the FT is not like watching a movie where the first scene shows you the end.

12-01-2010, 08:57 AM
Fourier has a lot to do with symmetry. Symmetry means: things are somehow equal, but somehow not. http://neondata.com/p/TestFourier.html

Dave Hein
12-01-2010, 02:54 PM

The impulse, or delta function, which is a single non-zero point in the sampled domain is actually a sin(t)/t function in the continuous domain.

This function shows to a certain degree the properties of a delta distribution, but isn't that function. The delta distribution is always positive; sin(t)/t is not. That is a significant difference.


It appears that you misunderstood my statement. In the discrete time domain an impulse function is ..., 0, 0, 0, 1, 0, 0, 0, ... If this signal is sent out through a DAC and filtered with an ideal low-pass filter with a cutoff frequency at half the sampling frequency, it will result in a sin(x)/x waveform. Specifically, the waveform will be

sin(PI*t/T)/(PI*t/T), where T is the sampling interval.

Note that this function is zero when t = n*T, where n is an integer, except when n = 0, where the function converges to a value of 1.

The frequency distribution of the discrete delta function is a constant value for all frequencies. The DFT of a delta function will produce coefficients with the same magnitude for all frequencies.
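Dave's last statement is easy to check numerically (Python sketch; the block size 16 and the impulse position are arbitrary):

```python
import cmath, math

def dft(x):
    # Direct (slow) discrete Fourier transform
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Discrete impulse: ..., 0, 0, 1, 0, 0, ...
impulse = [0] * 16
impulse[3] = 1     # the position only changes the phase, not the magnitudes

mags = [abs(c) for c in dft(impulse)]
print(mags)        # every bin has magnitude 1
```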


12-01-2010, 04:35 PM
Yes, communication is a problem. http://neondata.com/p/Acro0793.pdf

Every "real" function is not the delta pulse, because the delta pulse is not a function. But some functions have properties equal to corresponding properties of the delta distribution.
Like: they are "alive" only in a small region; the integral spanning the whole x-axis is very close to the integral over the area of interest.

That real functions are only approximations you can see from the fact that the sin(x)/x starts to oscillate before the 1 is passed to the DAC.

But reality is not an approximation of math; rather, math can approximately reflect the behavior of real things. ;-)

Dave Hein
12-01-2010, 05:04 PM
Well, technically speaking the sin(x)/x exists in the analog domain after the DAC, and not before. A practical low-pass filter has to be causal, and it can't oscillate before it sees the impulse. It would begin to oscillate when it sees the impulse, and would delay the signal depending on the number of poles and zeroes it has. And of course, we can only approximate the sin(x)/x function in practice.

It will be a challenge to develop a good "Fourier for dummies" document that is simple, and accurate. It might be best to introduce a few basic concepts and then provide links to other websites that go into more detail.

12-01-2010, 06:00 PM
Fourier for dummies <-> Fourier different

If you look at the literature, you mostly find extremely abstract papers, where you ask yourself from which starting point the author comes, or just some papers extracted from those first ones ;-))

"Ceci n'est pas une pipe" is painted on a very famous painting, and from this we can learn: things are never what they look like.
A signal as a function of a variable never contains any frequency. But you might say: under certain circumstances, the signal we have acquired is the same one we get from adding some sinusoids, each with a certain amplitude and a certain phase shift, with a fundamental frequency having just one period in that x-range and the same set of parameters for all harmonics.

One question we have to answer is: why only harmonics; could it not be better to use just one frequency and then another? That is, in my opinion, what FFD should deliver (FFD = Fourier for dummies).

12-01-2010, 07:00 PM
Here is a simple DFT demo program.
It works, but it's not the fastest in the world (it's all spin).

I need to document it a little more for inclusion in the tutorial.

P.S. If you happen to have a harmonica it works great as a test sound source.


Ed T
12-01-2010, 09:11 PM
@Erna - I covered how the 2 numbers fully describe the content in Key Details #1.

My approach was to explain the core concept with as few details as possible. I have found when I am trying to learn something new, all the details can obscure the core concept. I have trouble distinguishing between what is the core and what is a detail.

In fact when I learned Fourier in Engineering school I could do all the math and pass all the tests, but I did not realize what the core concept was. That was because I was overwhelmed with all the theorems, details and calculations (or maybe it was all the beer that overwhelmed me).

You are right that for a complete or more detailed understanding there is plenty more to understand. Also the dividing line between core and detail is certainly subjective.

- Ed

12-01-2010, 09:32 PM

Excellent stab at FFD. Just the approach I had in mind:)

Now that you have done that, and what with the PDF linked to by RickB, I will now bin my effort, which was getting too wordy already anyway.

Now if we could only get from DFT to FFT in a few simple logical steps that would be amazing.

12-01-2010, 10:17 PM
Now if we could only get from DFT to FFT in a few simple logical steps that would be amazing.
Yeah, I would like to get to FFT, but I don't understand FFT myself yet. I think that is a good thing because I will be able to explain it better after I get it through my thick head.


12-02-2010, 05:56 AM

I have to read through FFD a few more times. But I have a couple of comments:

1) Perhaps the graphical examples should use fewer samples, say only 16, just to make the pictures look simpler and less scary.

2) It should be pointed out that, as we have only samples of the input waveform, it is only possible to have the fundamental frequency and its harmonics, up to the Nyquist limit, represented by those samples, and therefore present in the output.

3) Whilst you demonstrate that multiplying the input sine wave by a wrong-frequency "test harmonic" gives a zero result, it should be emphasised that this is true for all test sine waves of the wrong frequency. I think this is a significant point to grasp. That's why this probing technique works.

4) I'm wondering if it is worth mentioning that, whilst the algorithm described is very slow when wanting a complete spectrum as output, a simpler version of it can be used to detect a single frequency in a continuous stream of input. Basically a quadrature detector, as in the software defined radio for the Prop.

I also don't get the FFT very well yet.

12-02-2010, 07:19 AM

I notice that when describing how to detect a single frequency by multiplying an input wave by a sine wave and then taking the average of the result, you divide by N/2 rather than just N, without any justification for the 1/2 (well, apart from that it works here). A reader may wonder about that.

Then in your example code I see you are only creating a spectrum output with half as many frequency bins as the number of input samples.

Ah, now I see why there is 1/2 used.

Normally when the DFT is described there are as many frequency bins in the output as there were samples in the input.

The second half of the frequency bins actually contains a mirror image of the first half (same values in reverse order). That means half of the spectral energy is in the second half of the output.

In your presentation you are ignoring the second half of the frequency bins and compensating for the energy loss by using the N/2.

What actually is in that top half of the frequency bins? Turns out to be the negative frequencies present in the input.

Is this important? I'm not sure but if anyone were to compare your DFT with the normal equations they would see 1/N not 2/N and perhaps get confused again.

A simple case:
1) Input is a sine wave, amplitude 1.0.
2) Frequency we test against is a sine wave, amplitude 1.0.
3) Multiplying these gives a sine wave, amplitude 0.5, but with a DC offset of 0.5.
4) Taking the average of that yields 0.5.

But our input amplitude was 1.0; what happened to the other 0.5?
Turns out it is there in the negative frequency region.

What is this negative frequency, how is that possible?

Normally the DFT has as many output frequencies as input samples. BUT given N samples of input, there are only half as many frequencies that can be represented: the Nyquist limit. So what is normally in the other half of the output? It's the other half of the missing energy, but for negative frequencies. The negative frequencies are the same as the positive ones, just generated from something that rotates in the opposite direction.
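The mirrored top half is easy to see numerically (a Python sketch; 16 samples and 3 cycles are arbitrary choices):

```python
import cmath, math

def dft(x):
    # Direct (slow) discrete Fourier transform
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

N = 16
# Real input: 3 cycles of a sine, amplitude 1.0
x = [math.sin(2 * math.pi * 3 * t / N) for t in range(N)]
mags = [abs(c) for c in dft(x)]

# Bin 3 and bin N-3 = 13 (the "negative frequency") each hold N/2 = 8:
# half of the energy sits in the mirrored top half of the spectrum, which
# is why keeping only the bottom half needs the 2/N instead of 1/N.
print([round(m, 3) for m in mags])
# Mirror symmetry for real input: bin k equals bin N-k
assert all(abs(mags[k] - mags[N - k]) < 1e-9 for k in range(1, N))
```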

Ahhh !...Fourier gave us so much to think about. Before we even get to using the thing.

12-02-2010, 08:48 AM
The problem of facing a paradox:

What do we do if we face a paradox? Mostly we find an explanation no one can understand, but no one dares to admit it.

A simple example, far from being complicated: two equal capacitors, one charged, one empty, connected via an ideal switch and wires without losses. If you close the switch, half of the original energy is gone. And now, where? This is discussed over and over, and a lot of imagination is used to explain it.

OK. Here the FT-related paradox: you have two different frequencies, let's say 1 Hz and 1.1 Hz, and we want to know how much of one frequency is "found in the other".

Now, who can give an answer to both questions?
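The capacitor half of the paradox at least can be made concrete with a few lines (a Python sketch of the bookkeeping only; it shows that half the energy is missing, not where it went, so it does not answer ErNa's question):

```python
# Two equal capacitors C, one charged to V, one empty, then connected.
C, V = 1.0, 1.0

q_total = C * V                  # charge on the charged cap; charge is conserved
e_before = 0.5 * C * V ** 2      # energy stored before the switch closes

# After connection both caps sit at the same voltage, sharing the charge:
v_after = q_total / (2 * C)      # = V/2
e_after = 2 * (0.5 * C * v_after ** 2)

print(e_before, e_after)         # 0.5 0.25: half the energy is "gone"
```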

12-02-2010, 11:13 AM

I think we have opposing philosophical views here.

On the one hand you make statements like "there is no frequency" and imply that there is some mystical paradox going on.

On the other hand I see a concrete reality of numbers going in and numbers going out of a computer.

OK, you could go further and say "there are no numbers", just different states of transistors forming patterns in a silicon chip, which we humans just happen to interpret as numbers.

One can make such statements but it's not very helpful. After all we did make those silicon chips to behave in a way that we like and that corresponds to concepts that we humans have.

Thing is Fourier algorithms do work on computers, we do get what we want. There is no magic and no paradox in the machine. It's a real machine working in the real world producing real results.

So, my thesis is that the operation of such an algorithm on such a machine is explainable without the mystical objects of higher mathematics: infinity, integrals, complex numbers etc.

...paradox: you have two different frequencies, let's say 1 Hz and 1.1 Hz, and we want to know how much of one frequency is "found in the other"
I'm not sure I understand your meaning of "found in the other". But it looks like a trick question.

Reality is we have a fixed number of samples of some signal as our input data. In fact forget the signal, we just have a list of numbers representing values at equally separated points in time.

Given that list, a few moments' reflection will show that the only frequencies that can be represented by such a list are harmonics of the lowest frequency of which we can fit a whole cycle into our sample set, up to the point where every sample has an opposite value: the Nyquist limit. And zero frequency of course.

So if we have 16 samples taken over one second the only possible frequencies that can be represented are: 0, 1, 2, 3, 4, 5, 6, 7, 8 Hertz.

Often the discrete fourier transform is presented as having the same number of output frequencies as input samples. In which case the output list represents amplitudes for frequencies 0, 1, 2, 3, 4, 5, 6, 7, 8, -7, -6, -5, -4, -3, -2, -1 Hertz.

Hmm...I think that's the right order.

"But what about frequencies in between those harmonics?" I hear you say. Repeatedly now. "My input signal may have those".

Sure your analog input may have those "in between" frequencies. But I maintain that the data samples we are working with cannot represent those frequencies.

In the same way that our input samples cannot represent those little wiggles that occur in our signal in between sample points, it cannot represent any continuum of frequencies either.

Indeed, if you sample an analog input signal which is a sine wave at some "in between" frequency, you will get a result with the energy shared among neighboring frequency bins in the output, and probably some DC level as well.
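Heater's last point, checked numerically (Python; the choice of 2.5 cycles in a 16-sample block is an arbitrary "in between" example):

```python
import cmath, math

def dft_mags(x):
    # Magnitudes of a direct (slow) DFT
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n)]

N = 16
# An "in between" frequency: 2.5 cycles per block of 16 samples
mags = dft_mags([math.cos(2 * math.pi * 2.5 * t / N) for t in range(N)])

for k in range(N // 2 + 1):
    print(f"bin {k}: {mags[k]:.3f}")
# The energy does not land in a single bin: the neighbours of 2.5 (bins 2
# and 3) get the most, and every other bin, including DC, gets some.
```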

12-02-2010, 01:02 PM
Yes, we have a different understanding, and so does everyone. On one hand I make, on the other hand you see: how to compare this? There is no discrepancy, it's just different.

You are right: numbers in, numbers out. But what has this to do with a frequency? What has this to do with the FT? Numbers have a meaning, there are certain relations, and a transform can make obvious what is hidden.

What I showed above is: even numbers follow rules and are not just there.

If you have a number of numbers representing a signal, you can see there is already something confusing: a number of numbers.

Fourier works on a computer, but mostly you do NOT get what you want. You get what you want if there is a certain problem of which you know it can be solved using the FT, but this is not usual. Only if the bunch of problems you want to solve consists of problems where the FT is a way to solve them will you be successful.

It is as if you say: the sky is full of stars. Yes. But stars are just spots in the dark, and mostly the sky is black. Remove the stars, the sky will stay. And the reverse?

Look: you say "Reality is we have a fixed number of samples of some signal as our input data. In fact forget the signal, we just have a list of numbers representing values at equally separated points in time."

But what you don't say, and what most people are not aware of, is that there is one additional condition: if you have n data points, there is a data point n+1 equal to data point 1, a data point n+2 equal to data point 2, and so on, for all time. Only then does an FT make sense.

If this were not the case, you would never be allowed to talk about "frequencies".

What I want to show is: under certain conditions something is true. If the conditions are slightly different, it is no longer true AND not false. It's nearly true. So we no longer have a binary logic.

To understand the Fourier transform really, you have to know some rules about it. And one is: the FT can express a signal represented by an array of numbers, where these numbers are seen as EQUIDISTANT samples of a function of a variable. The defined interval is one period of an infinite number of periods, and only then can you express this interval as a sum of phase-shifted sinusoids, where there is a fundamental frequency covering 2Pi in said interval, plus n-1 harmonics.

The point is: there are no frequencies in between those harmonics. It is just this: let's say the function is a sinusoid of one period in the interval. Then the Fourier transform gives a spectrum 1, 0, 0, ..., 0, 0, 1.

That is very simple, because there is only one 1 and all the rest is zero. But the last element is also 1. Because you know that the origin was a sinusoid, you know the last element is an "alias".
If now you change the input signal by only one sample, the spectrum will be nonzero for all frequencies.

We cannot decide whether the result comes from a real waveform or from a wrong selection of the interval. That is what I wanted to point to!

"There is no frequency" means: we define the frequency. We, the observers, define what "is".

We do not know if the Propeller really has 8 cores or if there isn't a single, very fast core emulating 8. Even if you open the housing and see 8 similar patterns, this could be mimicry. But there is evidence that there are 8 cogs: the power, the price and the spectrum emitted when the chip is running ;-)

Dave Hein
12-02-2010, 01:09 PM
The reason you can't have frequencies that are not multiples of the fundamental frequency is that the signal block repeats over and over to +/- infinity. This assumption is true for both the Fourier series and the discrete Fourier transform. Under this assumption, a non-multiple sinusoid is actually a more complex signal made up of multiple frequency components with a discontinuity at the block boundary.

You could use a block size that matches both the 1 Hz and 1.1 Hz sinusoids. However, in practice the block is multiplied by a window function to remove this constraint.

The spectrum does include N frequency values. For real-valued data the spectrum is symmetric around the Nyquist frequency (one-half the sampling frequency). In general, this is not the case for complex-valued data. Of course, in the real world we normally deal only with real-valued data.
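Dave's windowing point can be sketched like this in plain Python (the Hann window and the particular frequencies are just my illustrative choices, not from the thread): a sinusoid that does not complete a whole number of cycles in the block leaks into every bin, and multiplying by a window pushes that far-off leakage down.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform, straight from the definition."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

N = 64
# 4.5 cycles per block: the frequency falls between bins 4 and 5.
sig = [math.sin(2 * math.pi * 4.5 * j / N) for j in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * j / N) for j in range(N)]

rect_spec = [abs(v) for v in dft(sig)]
hann_spec = [abs(v) for v in dft([s * w for s, w in zip(sig, hann)])]

# Look only at bins well away from the signal (and its mirror image at N-4.5):
far = range(12, 53)
rect_leak = max(rect_spec[k] for k in far)
hann_leak = max(hann_spec[k] for k in far)

# The window trades a slightly wider peak for far smaller leakage:
assert hann_leak < rect_leak
```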

12-02-2010, 02:39 PM
Frequency = unique series of amplitudes over time. Frankly, I think the concept of resonance needs to be applied here. Some resonance between the math construct, and elements of the data, reveals information about the frequencies within the data.

12-02-2010, 03:31 PM
@Dave: even though we both know quite a lot about FT, we are rarely able to share this knowledge and to detect overlap. So, what do others think?

You are completely right, and I said it above: whenever you select an interval (and that is what you do in having an array of numbers), you implicitly accept that the single elements of this array are sampled at moments in time, that from sample to sample the time advances by the same distance, and that the sampling is dense enough that the signal doesn't change wildly from sample to sample. This is where to start.
Further, we imply that the signal is repeated over and over in time and that there is no discontinuity when changing to the next period; that is, the condition we have from sampled point to point is also valid for the transition to the next interval.

Only if this condition is met does the FT bring results which make sense.

And it doesn't matter whether we are talking about the DFT, the FFT, or whatever FT, because that is just information about the implementation and realization of the algorithm. In my eyes, it makes less sense to explain how the algorithm runs than to explain what it is good for. To be sure, I have not yet answered for myself what is behind the butterfly operation, that is, what it means to calculate the FFT of a 2-sample interval.
As long as I do not see this, I am not satisfied. But that is not my problem.
To me it is funny that the most simple things can be in the focus of science, and that answering very simple questions can be extremely difficult and time consuming. But it can bear a lot of fruit.
Stripping adhesive tape from graphite can bring you the Nobel Prize. And that is much more difficult than running a collider ;-)

12-02-2010, 03:40 PM
Just a parallel:
If you are pushing a swing, you are normally not applying a sinusoidal force but a "push", and the swing does a DFT: it selects just its eigenfrequency (its natural frequency) and so gains amplitude push by push.
That really is an example of resonance. So we see: if a resonator is excited by a pulse, then this pulse contains a frequency determined by the resonator. And that's real. A signal contains a frequency if a resonator exchanges energy.

12-03-2010, 06:30 AM
About the importance of FT:

FT is a very powerful tool, applied in many applications, simple and sophisticated ones.

Let's assume you have a signal with a single frequency added, like 60Hz hum from the grid. You do the FT, you cut away the 60Hz component and you do the inverse FT. You will get the original signal, but if there was a genuine 60Hz component in the signal, it is now missing too. That may be acceptable.

But instead of the inverse FT you could do 3 more forward FT's and get the same result. That is funny. Why is it? What does it mean? What is it good for?
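Both of those claims are easy to verify numerically. A plain Python sketch (my choice of bins: the "hum" is a synthetic bin-9 component standing in for 60Hz, the wanted signal sits at bin 3). On the curiosity: with the usual unnormalized DFT, two forward transforms give N times the time-reversed signal, so four forward transforms give N squared times the original, i.e. the same result up to a known scale factor.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

N = 64
wanted = [math.sin(2 * math.pi * 3 * j / N) for j in range(N)]        # the signal
hum = [0.5 * math.sin(2 * math.pi * 9 * j / N) for j in range(N)]     # "60Hz" stand-in
noisy = [a + b for a, b in zip(wanted, hum)]

# Forward transform, zero the hum bins (bin 9 and its alias N-9), inverse transform:
X = dft(noisy)
X[9] = X[N - 9] = 0
cleaned = idft(X)
assert max(abs(c - w) for c, w in zip(cleaned, wanted)) < 1e-9

# The curiosity: instead of one inverse FT, do 3 more forward FT's.
# Four forward transforms give N*N times the original signal back:
four = dft(dft(dft(dft(noisy))))
assert max(abs(f - N * N * x) for f, x in zip(four, noisy)) < 1e-6
```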

FT has another important property:
You know that the log allows you to bring multiplication down to addition, which simplifies calculations.
In the same way, the FT brings convolution down to multiplication.
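That property can be demonstrated in a few lines. One caveat (my addition, not from the post): the DFT version gives circular convolution, where indices wrap around the block. A plain Python sketch:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def circular_convolve(a, b):
    """Direct circular convolution: O(N^2) multiply-adds in the time domain."""
    n = len(a)
    return [sum(a[m] * b[(j - m) % n] for m in range(n)) for j in range(n)]

a = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
b = [1.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]

direct = circular_convolve(a, b)
# Convolution in time becomes plain element-wise multiplication in frequency:
via_ft = idft([p * q for p, q in zip(dft(a), dft(b))])

assert max(abs(d - v) for d, v in zip(direct, via_ft)) < 1e-9
```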

The FT makes it possible to reconstruct a picture from computer tomography data.

And so on.

12-03-2010, 10:06 AM

Interesting debate. I don't disagree with you but I have to make a few comments:)

It could be argued that numbers, 1, 2, 3 etc. don't exist. They are just a mathematical shorthand, a mental short cut, a figment of our imaginations. "3" is just shorthand for "the successor of 2", "2" is just shorthand for "the successor of 1", and "1" just represents "a thing" (whatever it may be) as opposed to "0", which is "not to have the thing".

We learn about numbers at an early age and live with them for such a long time that they take on a reality like the nose in front of your face. As my young son said when I was counting ducks in a picture book with him "No dad, that's not three ducks, that's just another duck". I could not fault his logic.

Clearly there are no numbers inside a computer. Just ethereal patterns of transistors switched on or off that we interpret as numbers. Then there are the networks of other transistors that shuffle those patterns around in ways that we like to imagine are operations like addition, subtraction etc.

Still, this view of things rarely gets us to any practical result.

If you have a number of numbers, representing a signal, you can see, there is already something confusing: number of numbers
I hope that "number of numbers" is not too confusing or it's going to make handling arrays a bit tricky. Now, a "complex number of numbers" gets interesting:)

Fourier works on a computer, but mostly you do NOT get what you want. You get what you want if there is a certain problem which you know can be solved using FT, but this is not usual. Only if the bunch of problems you want to solve consists of problems where FT is a way to solve them will you be successful.
This is true BUT... I could say "Addition works on a computer, but mostly you do NOT get what you want". The point being that addition is defined mathematically to work in ways that a computer mostly cannot handle. Even just thinking about integers, a mathematical "+" is supposed to work for all integers, positive and negative, from zero to plus/minus infinity. A computer, being a finite machine, can only handle a vanishingly small part of the mathematical concept of addition. And that's before we get onto addition of real numbers, vectors, matrices etc.

Of course you have to be aware of your machine's limits, its number range, its memory capacity etc. Then you can solve real problems in some limited domain.

But, what you don't say, and what most people are not aware of: there is one additional condition. If you have n data points, there is a data point n equal to data point 0, a data point n+1 equal to data point 1, and so on, for all time. Only then does a FT make sense.
I disagree; if I have n data points I have n data points, that's it. For sure Bean's Fourier transform algorithm knows nothing more than those n points, and it seems to do quite well without any deep thoughts about infinitely repeating sequences.

If this were not the case, you would never be allowed to talk about "frequencies".
This is clearly not true. If I have n samples and they go like -1, +1, -1, +1, -1... I can easily see n/2 alternations or cycles of the pattern -1, +1 in my sample set. Given that my sample set represents a length of time, I can say I have a frequency.

And in this way it is possible to make sense of the Fourier transform without bringing in the concept of infinity.
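The -1, +1 counting argument above can be made concrete with a small sketch (plain Python, my naive `dft()` helper): with 16 samples of that alternating pattern, all the energy lands in bin 8, exactly the n/2 alternations counted by eye, and no infinity is needed anywhere.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, straight from the definition."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

samples = [-1.0, 1.0] * 8          # n = 16 samples, n/2 = 8 alternations
mags = [abs(v) for v in dft(samples)]

# All the energy sits in bin n/2, the "8 cycles per block" frequency:
assert abs(mags[8] - 16.0) < 1e-9
assert all(m < 1e-9 for i, m in enumerate(mags) if i != 8)
```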

The point is: there are no frequencies in between those harmonics.
Yep, think I said that earlier when discussing the 1.0Hz and 1.3Hz issue.

It is just this: let's say the function is a sinusoid of one period in the interval. Then the Fourier transform gives a spectrum 1, 0, 0, ..., 0, 0, 1.
Actually this is not quite right. Let's say you have a single cycle of a sine wave, amplitude 1.0, represented in 16 samples, and you perform the discrete Fourier transform as per the usual definition LINK HERE with the outputs scaled by 1/N. Then the result will be 0.0 in the first output bin (the DC level), 0.5 in the second frequency bin (1 cycle/sampling time) and then 0.5 in the last frequency bin.

Why is that? Well, during the transform we are performing a multiply of the input sine by a test-frequency sine. When the test frequency is the same as our input we have sin * sin, or sin squared, the average of which is 0.5.

So half the energy of our input sine wave has shown up at a frequency of 1 and half the energy has turned up at a frequency of -1 !!!
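Those numbers are easy to check. A plain Python sketch, scaling the DFT outputs by 1/N as above (that scaling is an assumption on my part; with the unnormalized definition the bins read 8 rather than 0.5 for 16 samples):

```python
import cmath
import math

N = 16
x = [math.sin(2 * math.pi * j / N) for j in range(N)]  # one cycle, amplitude 1.0

# The average of sin squared really is 0.5:
assert abs(sum(s * s for s in x) / N - 0.5) < 1e-12

# DFT scaled by 1/N:
bins = [abs(sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / N) for j in range(N))) / N
        for k in range(N)]

assert bins[0] < 1e-12                 # nothing in the DC bin
assert abs(bins[1] - 0.5) < 1e-12      # half the amplitude at +1 cycle
assert abs(bins[N - 1] - 0.5) < 1e-12  # half at "-1 cycle", i.e. the last bin
```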

Anyway, I'm just quibbling here:)

We cannot decide whether the result comes from a real waveform or from a wrong selection of the interval.
I agree. And I think it is a very important point. If anyone is going to use a Fourier transform as performed by a computer, they must be aware of its limitations. Otherwise it can lead to all sorts of strange looking results and confusion.

12-03-2010, 01:48 PM
OK, this really is an interesting discussion. Arguments going forward and backward, but today I am running out of time. I will try to set up another cmap and try to discuss it point by point.
Very interesting, because we sometimes are convergent and sometimes divergent and sometimes we don't know what we are.

Dave Hein
12-03-2010, 02:08 PM

12-04-2010, 03:02 PM
Two things:

1) I have today decided that I am a dummy.

2) I have just had an insight into the workings of the FAST Fourier transform.

A few decades back I saw my first listing of a program that performed a Fast Fourier Transform. It was in BASIC for the Atari ST. No explanation was given for how it worked. "No problem" I thought, as I had understood something of the Fourier transform back in university. How wrong I was.

On reading the listing I found it started off with a weird thing. One by one it took all the elements of the input array and moved them into an output array, such that each element's index in the output array was formed by taking its index in the input array and reversing all the bits.
So the element at %00000000 was written to %00000000, the element at %00000001 was written to %10000000, the element at %00000010 was written to %01000000. Etc. etc.
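That reordering step can be sketched in a few lines (Python here rather than the original Atari BASIC; the function names are mine):

```python
def bit_reverse(i, bits):
    """Reverse the lowest `bits` bits of index i."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)   # shift the result left, pull in the low bit of i
        i >>= 1
    return r

def bit_reverse_copy(src):
    """Copy each element to the slot whose index is its own index, bit-reversed."""
    bits = (len(src) - 1).bit_length()
    out = [None] * len(src)
    for i, v in enumerate(src):
        out[bit_reverse(i, bits)] = v
    return out

# Over 8 bits: %00000001 -> %10000000, %00000010 -> %01000000, as described.
assert bit_reverse(0b00000001, 8) == 0b10000000
assert bit_reverse(0b00000010, 8) == 0b01000000

# With 8 elements (3 bits each) the whole array shuffles like this:
assert bit_reverse_copy([0, 1, 2, 3, 4, 5, 6, 7]) == [0, 4, 2, 6, 1, 5, 3, 7]
```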

The rest of the program then operated on that reordered array of samples.

Needless to say I could not fathom why one would want to do such a crazy nuts thing and the rest of the code did not help.

However today I awoke from my afternoon nap with a nagging thought in my mind, a realization, another little step forward in the journey to understand the FFT. Given that I should have seen this decades ago I am now convinced of my dummy status.

Part 1 - Bit Reversal.

Apologies to all FFT gurus for whom this is all very dull. Here we go:

1) Imagine we want to add up 8 numbers S0, S1, S2, S3 .... S7.

2) Say we had some function to do this. Call it SUM(). Then we could write:

result = SUM(S0, S1, S2, S3, S4, S5, S6, S7)

3) But just to be weird we could use SUM on all the even position numbers separately, then all the odd position numbers, then add the two results. So we write:

result = SUM(S0, S2, S4, S6) + SUM(S1, S3, S5, S7)

This would obviously give the same overall summation.

4) To be more weird we could do that again for each of our two SUM(...)s, taking first the even positioned S's and then the odd positioned S's. We write:

result = SUM(S0, S4) + SUM(S2, S6) + SUM(S1, S5) + SUM(S3, S7)

5) And weirder still we could split it up again:

result = SUM(S0) + SUM(S4) + SUM(S2) + SUM(S6) + SUM(S1) + SUM(S5) + SUM(S3) + SUM(S7)

6) But now our SUM function is not doing any adding at all, it just returns its argument, so we could write:

result = S0 + S4 + S2 + S6 + S1 + S5 + S3 + S7

7) But, coming back to our senses, we realize we have a function to do that, it's called SUM, so we write:

result = SUM(S0, S4, S2, S6, S1, S5, S3, S7)

Which of course is the same as where we started in 2) but with the parameters in a funny order.

Now, let's look at the order of the arguments we started with and the order we ended up with in both decimal and binary:

Initial order        Final order
Decimal  Binary      Binary  Decimal
   0      000         000      0
   1      001         100      4
   2      010         010      2
   3      011         110      6
   4      100         001      1
   5      101         101      5
   6      110         011      3
   7      111         111      7

Notice anything odd?

Well, blow me down, the binary representation of the final order is the mirror image, the bit reversal, of the starting order in binary! And so it will be, even if we have bigger sets of numbers.
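The SUM() game's reordering can be checked mechanically. A short Python sketch (my own helper functions) confirming that recursive even/odd splitting yields exactly the bit-reversed order for any power-of-two size:

```python
def split_order(xs):
    """Recursively take the even-position elements, then the odd-position ones."""
    if len(xs) <= 1:
        return list(xs)
    return split_order(xs[0::2]) + split_order(xs[1::2])

def bit_reversed(n):
    """Indices 0..n-1, each sent to the slot named by its bit-reversed binary form."""
    bits = (n - 1).bit_length()
    return [int(format(i, '0%db' % bits)[::-1], 2) for i in range(n)]

# The final order from the SUM() game matches the table:
assert split_order(list(range(8))) == [0, 4, 2, 6, 1, 5, 3, 7]

# ...and the pattern holds for bigger sets of numbers too:
for n in (2, 4, 8, 16, 32, 64):
    assert split_order(list(range(n))) == bit_reversed(n)
```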

And that is why the famous FFT bit reversal re-ordering is so, well, famous. Slaps head very hard.

Part 2 - Why is this important?

If you look at Bean's Fourier Spin code example you will see that it has two nested repeat loops, each performing as many repetitions as there are input samples. For each of N output frequencies there are N multiplications of input samples by sin and cos. That means the execution time goes up as N squared and rapidly becomes very slow for large sample sets.

Let's call the multiplications by sin and cos the "guts" of the algorithm.

For 8 samples there are 8 times 8 "guts" to perform = 64 "guts"
For 1024 samples there are 1024 * 1024 "guts" to perform = 1048576 "guts"

This is not practical. That's a lot of multiplications.

Turns out that for the Fourier transform you can split the job up into halves. Take the transform of each half and then add the results. Using every even sample for the first half and every odd sample for the second half. Much as we did for the SUM process above.

Again, as we did for sum above, you can split the halves into two, take the transform of those and add them.

This can go on recursively splitting the samples in half, transforming them and adding the results.

Why is that important?

Well if you start with 16 samples you have 16 * 16 = 256 "guts" to perform

If you can spilt that into two sets of 8 samples you only have (8 * 8) + (8 * 8) = 128 "guts"

If you can split that into 4 sets of 4 samples it's (4*4) + (4*4) + (4*4) + (4*4) = 64 "guts"

And so on. By splitting the thing up we get rid of the "N squared" execution time problem.
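The bookkeeping above can be sketched too. And if we also charge for the combine step (that is my accounting, which the plain splitting argument above ignores: N/2 extra multiplies to merge each level), fully splitting gives the familiar (N/2) * log2(N) count:

```python
def guts_split(n, levels):
    """Total 'guts' after splitting n samples into 2**levels independent blocks."""
    blocks = 2 ** levels
    return blocks * (n // blocks) ** 2

# The halving game from the text:
assert guts_split(16, 0) == 256   # 16 * 16
assert guts_split(16, 1) == 128   # (8*8) + (8*8)
assert guts_split(16, 2) == 64    # (4*4) + (4*4) + (4*4) + (4*4)

def guts_fft(n):
    """Split all the way down, charging n/2 multiplies to recombine each level."""
    if n == 1:
        return 0
    return 2 * guts_fft(n // 2) + n // 2

# For 1024 samples: 5120 "guts" instead of 1024 * 1024 = 1048576.
assert guts_fft(1024) == 512 * 10
```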

Given that the transform is split by "odds" and "evens" at each level, as in my SUM game above, clearly the same "bit reversal" re-ordering of the samples is required.


As you see I'm still miles away from "getting" the FFT or being able to code one. But a little light has come on:)

I'm still not clear on how the job is split into halves or exactly what you have to do to handle the halves correctly.

P.S. Does anyone else have the same problems with this that I do?

12-04-2010, 07:49 PM
Heater, that clicks for me. Great post. Yes, BTW.

12-05-2010, 02:27 AM
@Heater, Thanks, that is a great explanation of the "Bit Reversal" process.
What has me stumped is these "Butterfly" operations. I have no idea what they are. Can you shed some light on them as well?


Bob Lawrence (VE1RLL)
12-05-2010, 04:41 AM
Butterfly diagram : From Wikipedia, the free encyclopedia


In the context of fast Fourier transform (http://en.wikipedia.org/wiki/Fast_Fourier_transform) algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (http://en.wikipedia.org/wiki/Discrete_Fourier_transform) (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below.[1] (http://en.wikipedia.org/wiki/Butterfly_diagram#cite_note-Oppenheim89-0) The same structure can also be found in the Viterbi algorithm (http://en.wikipedia.org/wiki/Viterbi_algorithm), used for finding the most likely sequence of hidden states.
Most commonly, the term "butterfly" appears in the context of the Cooley–Tukey FFT algorithm (http://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm), which recursively (http://en.wikipedia.org/wiki/Recursion) breaks down a DFT of composite (http://en.wikipedia.org/wiki/Composite_number) size n = rm into r smaller transforms of size m where r is the "radix" of the transform. These smaller DFTs are then combined with size-r butterflies, which themselves are DFTs of size r (performed m times on corresponding outputs of the sub-transforms) pre-multiplied by roots of unity (http://en.wikipedia.org/wiki/Root_of_unity) (known as twiddle factors (http://en.wikipedia.org/wiki/Twiddle_factor)). (This is the "decimation in time" case; one can also perform the steps in reverse, known as "decimation in frequency", where the butterflies come first and are post-multiplied by twiddle factors. See also the Cooley–Tukey FFT (http://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT) article.)"

12-05-2010, 10:21 AM
Thanks Heater, this is surely the most simple way to understand why bit reversal works. And this can show why symmetry is such an important concept. You could say "bit reversal" and think about how to implement it with an algorithm on a computer. But you could also place a mirror to the right of the index 0, 1, 2, 3, 5 and you would see the new index in the mirror. But only if you represent the original index as a binary number with two symbols. Now suppose you have a next generation optical or photonic propeller (oppro or phopro) with LEDs or quantum lasers showing the original index in binary; you mirror this light onto photo transistors and this gives you the reordered index in real time. That would be a completely different implementation of the same principle. But isn't that what an actual optical instrument like a microscope or the eye of an insect does?
Never stop thinking. Even if you are told to.

12-05-2010, 01:21 PM

What has me stumped is these "Butterfly" operations.... Can you shed some light on them as well ?
Earlier I said something about the "blind leading the blind" but I'm starting to get a glimmer of light re: FFT.

Let's play with that SUM function I described earlier: SUM(S0, S1, S2, S3, S4, S5, S6, S7)

Let's imagine that we actually implemented that SUM function in some language and that our implementation was recursive.

In pseudo code it might look like:

FUNCTION SUM(S)
    IF LENGTH(S) = 1 THEN
        RETURN (FIRST(S))       ; Only one element, nothing to sum, so just return it.
    r1 = SUM(EVENS(S))          ; Recursively sum the even position elements.
    r2 = SUM(ODDS(S))           ; Recursively sum the odd position elements.
    RETURN (r1 + r2)
Here we assume a language that can handle lists of our S type things and we have functions to extract the odd and even position elements from lists.

We might call this function like:

s = LIST(S0, S1, S2, S3, S4, S5, S6, S7)
result = SUM(s)
Now, if we think about how that call proceeds, we see that a SUM over 8 elements calls SUM over 4 elements which calls SUM over 2 elements, which calls SUM over 1 element which just returns the element.

Graphically we can see the calls proceeding as in the attached picture where I have put the lists involved in each call to SUM into boxes and added lines to track the progress of individual elements.

1) If you look at that picture you will see why the term "butterfly" is used. The way the paths of elements criss-cross starts to look like butterflies.

2) It seems that doing this recursive process performs the exact same swapping around of elements as the bit reversal algorithm.

Turns out that the Fourier transform can be written recursively much like our SUM function, except that each call of the function actually returns a list of values. Each of those values is in two parts, a sin part and a cosine part. Also it is required to multiply one of the "sub DFTs" by a "twiddle factor".

What this means is that if one were to implement a recursive DFT then no bit reversal step is required. The way the calls progress just does that for you:)

The bit reversal step is required when implementing the DFT non-recursively as is normally done.

I would refer you to the pseudo code for the FFT on this wiki page: http://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm

You will see it looks much like my SUM function above.

You will also see that the part many people casually refer to as the "twiddle factor" is actually the guts of the thing, where the multiplying by sin and cos happens. After all, the rest of the function does no calculating at all.

I'm going to have a go at implementing that recursive FT pseudo code as directly as possible in C++ with list objects as it's the closest thing to a Fast Fourier Transform I almost understand at the moment.

Back to peering into the darkness....

Edit: Fixed up the diagram a bit.

12-06-2010, 12:03 PM
Here is an interesting piece of trivia. The Fast Fourier Transform as we know it is attributed to J.W. Cooley (http://en.wikipedia.org/wiki/James_Cooley) and John Tukey (http://en.wikipedia.org/wiki/John_Tukey) as they developed it for computers in the 1960's. Back then Cooley said:

“someday radio tuners will operate with digital processing units. I have heard this suggested with tongue in cheek, but one can speculate.”

Of course this has been done even on the Propeller with Phil Pilgrim's radio receiver on the Propeller project:
http://forums.parallax.com/showthread.php?105674-Hook-an-antenna-to-your-Propeller-and-listen-to-the-radio!-%28New-shortwave-prog&highlight=listen+radio (http://forums.parallax.com/showthread.php?105674-Hook-an-antenna-to-your-Propeller-and-listen-to-the-radio%21-%28New-shortwave-prog&highlight=listen+radio)

12-06-2010, 04:53 PM
Trying to gain a deeper understanding, I am still far from having it. Now, what happens to the original signal when doing this bit-shift?
Suppose we have a sinusoid at the fundamental frequency. Taking all the even samples first and the odd ones later, we compress a full oscillation into the first half and into the second half, so we double the frequency of the signal! Doing this again with the two halves, we quadruple the frequency, so if we do this often enough, we have the highest possible frequency!
What happens when the origin was the highest frequency, so every period is only sampled at + and - amplitude? Then sorting odd/even will bring +1 to the first half of the interval and -1 to the second. Further steps will change nothing, as all values in the first half are equal.
This remark is just to have a picture in the windings of the brain of what's going on. I saw Chip in his video on speech generation, how he demonstrates the phase shift. Maybe he already knows how to explain the FFT.

12-06-2010, 05:39 PM
Since you asked about the Aha moment...
By combining Bean's document, which to me explains what we are trying to do, with Heater's description of the butterfly, which to me explains how to do the FFT in a practical sense, together with the little I know and can remember, I have now had that Aha insight.

I could never really understand the FFT even though I had all the math 40 years ago and have actually programmed the FFT (well OK, actually copied and used it) in several projects over the years. I feel now at least I have a chance at understanding, so well done to all, but don't stop, because there are some tantalizing hints of more to come.

12-06-2010, 06:02 PM

Trying to gain a deeper understanding, I am still far from having it.

Ha! Me too. I swear this thing is going to drive me insane, if it's not too late already.

"Now, what happens to the original signal when doing this bit-shift?"

I'm going to forget the bit reversal thing for a while; I'm just now trying to convince myself I understand the recursive DFT as presented in the derivation and pseudo code on wikipedia here http://en.wikipedia.org/wiki/Cooley%E2%80%93Tukey_FFT_algorithm

That approach splits the DFT and calls itself with the two halves recursively. Recursion stops when there is only one element in the set. This means it needs no bit reversal stage like the optimized, "in place" implementations use. As I pointed out with my SUM() game, the way the sample lists are split and passed around does all the reordering anyway.

Suppose, we have a sinusoid with the fundamental frequency..... we have the highest possible frequency!

I can't answer that but already I can see we are looking at things the opposite way around again:)

At the top level, a DFT of 8 samples can have a max frequency of 4.
Split to two DFTs of 4 samples each: max frequency of 2.
Split to four DFTs of 2 samples each: max frequency of 1.
Split to eight DFTs of a single sample each: max frequency of 0, i.e. DC!

Which is exactly what the pseudo code on the wiki does: for a 1 sample DFT it returns a single value, a DC level.

Still banging head against wall....

By the way this is an interesting article: http://www.librow.com/articles/article-10

There he plays a game like my SUM() game but writing out an 8 sample DFT mathematically.

I believe everything we need to understand is in that paper, if you are happy with all that e,i,pi stuff. Just now it's making my brain melt.

12-06-2010, 06:18 PM

Glad that butterfly thing helped a bit, I was afraid it would just confuse more.

I could never really understand the FFT even though I had all the math 40 years ago and have actually programmed the FFT (well OK, actually copied and used) in several projects over the years.

Exactly, same here. So near but so far, very frustrating.

12-06-2010, 09:18 PM
Well, as you pointed out, Bean's sample program has loops inside loops, resulting in lots of operations in the inner loop. And then this little butterfly comes along and magically reduces the inner loop count, not by a little but by a significant amount. And voilà, the FFT on a slow computer suddenly becomes very doable.

I am amazed that these two researchers had the insight to see this reordering that would make the FFT practical at a time when "computers" really were slow and microprocessors were not even a sparkle in Chip's eye. Well, we really owe them a debt, because all the noise our games make, and audio and signal processing (and much more), is based on this algorithm.

Maybe you should wear some head gear (to protect the wall of course).

12-06-2010, 09:49 PM
The butterfly does not just magically speed up the inner loop. Cooley and Tukey rearrange the whole process.

There is an interesting historical tidbit on the history of the FFT in this paper http://www.cs.dartmouth.edu/~rockmore/cse-fft.pdf (http://www.cs.dartmouth.edu/%7Erockmore/cse-fft.pdf)

12-07-2010, 06:36 AM

Now, what happens to the original signal when doing this bit-shift?
I think I have an answer to your question about what happens when the samples are split into ever smaller halves.

As you say it seems that if you take all the odd samples of a sine wave and put them into a half of the list then you appear to have doubled the frequency.

BUT. I'm just looking at that recursive FFT pseudo code on wiki (reproduced below). Here we can see that at each recursive call level the list is chopped up into odds and evens which end up in the first and second halves of a new list. List X goes in, list Y comes out.

Look what happens to the list length N. At each recursive call level it is divided by two.

That N is used in the "exp(...)" gobbledygook, the so called "twiddle factors"

Well, apologies to the math shy, the twiddle factor is actually our sin and cos work where we multiply samples by our "test" sin/cos waves because:

exp(−2πik/N) = cos(2πk/N) - i*sin(2πk/N)

See how dividing N by 2 actually doubles the frequency of the test sin and cos.
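Both steps in that argument can be spot-checked numerically (plain Python; the particular N and k are arbitrary choices of mine):

```python
import cmath
import math

N, k = 16, 3
for n in range(N):
    # Euler: exp(-2*pi*i*k*n/N) = cos(2*pi*k*n/N) - i*sin(2*pi*k*n/N)
    lhs = cmath.exp(-2j * math.pi * k * n / N)
    rhs = math.cos(2 * math.pi * k * n / N) - 1j * math.sin(2 * math.pi * k * n / N)
    assert abs(lhs - rhs) < 1e-12

    # Halving N in the twiddle factor is the same as doubling the test frequency k:
    halved = cmath.exp(-2j * math.pi * k * n / (N // 2))
    doubled = cmath.exp(-2j * math.pi * (2 * k) * n / N)
    assert abs(halved - doubled) < 1e-12
```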

Conclusion: Whilst splitting the samples into ever smaller halves we are also doubling the frequency of the sin and cos waveforms we are using as tests for the presence of frequencies. So it all works out:)

Is this a good bit of hand waving or not? Or even correct logic?

Y0,...,N−1 ← ditfft2(X, N, s):             DFT of (X0, Xs, X2s, ..., X(N-1)s):
    if N = 1 then
        Y0 ← X0                            trivial size-1 DFT base case
    else
        Y0,...,N/2−1 ← ditfft2(X, N/2, 2s)      DFT of (X0, X2s, X4s, ...)
        YN/2,...,N−1 ← ditfft2(X+s, N/2, 2s)    DFT of (Xs, Xs+2s, Xs+4s, ...)
        for k = 0 to N/2−1                 combine DFTs of two halves into full DFT:
            t ← Yk
            Yk ← t + exp(−2πi k/N) Yk+N/2
            Yk+N/2 ← t − exp(−2πi k/N) Yk+N/2
        end for
    end if
Here, ditfft2(X, N, 1) computes Y = DFT(X) out-of-place by a radix-2 DIT FFT, where N is an integer power of 2 and s = 1 is the stride of the input X array. X+s denotes the array starting with Xs.
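That pseudo code transcribes almost line for line into a high-level language. A sketch of mine in Python (recursion and list slicing stand in for the stride bookkeeping, and a naive DFT serves as the reference to check against):

```python
import cmath

def ditfft2(x):
    """Radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [x[0] + 0j]           # trivial size-1 DFT base case: the DC level
    even = ditfft2(x[0::2])          # DFT of (X0, X2s, X4s, ...)
    odd = ditfft2(x[1::2])           # DFT of (Xs, Xs+2s, Xs+4s, ...)
    y = [0j] * n
    for k in range(n // 2):          # combine DFTs of two halves into full DFT
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor
        y[k] = even[k] + t
        y[k + n // 2] = even[k] - t
    return y

def dft(x):
    """Naive O(N^2) transform, straight from the definition, for comparison."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n))
            for k in range(n)]

samples = [0.0, 1.0, 2.0, 3.0, 1.5, -1.0, -2.5, 0.5]
fast = ditfft2(samples)
slow = dft(samples)
assert max(abs(a - b) for a, b in zip(fast, slow)) < 1e-9
```

Note there is no bit reversal step anywhere: the even/odd slicing at each call does the reordering, just as in the SUM() game.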

12-07-2010, 07:10 AM
Notice how the pseudo code for the recursive FFT looks remarkably similar to the recursive SUM() pseudo code I posted above. It divides into odd and even position elements in the same way, thus stirring up the sample order in the same way as the bit-reversal stage used in non-recursive (in place) implementations.

Of course the FFT calls return a new list rather than a single sum value. Then each element of one half of the list is multiplied by the "exp gobbledygook" (our sin and cos test frequencies) prior to adding it to the output.

Graham Stabler
12-07-2010, 08:32 AM
This is really interesting, I hope I find some time to get involved. I've done a lot with Fourier in optics (because a lens is a Fourier transformer and light can be represented with an angular spectrum etc. etc.) and I'm pretty happy with it. However, the F in the FFT always did confuse me. I'm another guy who just ripped it out of Numerical Recipes in C or, worse, just uses the FFT command in Matlab.


12-07-2010, 11:38 AM
One aspect to consider of the final form of this discussion is the Path of Discovery. The target audience is those that "need to use" FFT/DFT but do not necessarily have a deep math background. But, we notice the "A-ha!" moments tend to come after "side trips" to more complicated references.

The ultimate "simple" explanation might benefit from including the links to these more complicated side references, in a similar order as found in the thread.

I'm guessing new explorers will have to read the final explanation several times, going a bit deeper each pass. So far I've read through several times; it's still watering dry ground, but getting better.

This is really good work. I appreciate it, and I'm sure others do and more will.

12-07-2010, 12:29 PM
Graham Stabler.

F in the FFT always did confuse me

The "F" in "FFT" is starting to take on a whole other meaning for me:)

OK. I think we have uncovered a dirty little secret here.

What we have is a world full of software engineers, electronics engineers, etc., loaded up with appropriate degrees and diplomas in maths, physics, electronics, computer science and so on. Many of them use the Fast Fourier Transform from time to time in all kinds of contexts without ever knowing how it works or how to make one. And probably without wanting to admit it, perhaps thinking that everyone else "gets it" and they don't want to appear dumb. The secret is, everyone else does not get it either:)

This is kind of odd given the ubiquity of the FFT.

Sure, we all use microprocessors without knowing how to design one, and we may use other maths packages that we could not create. But I think this is a little different. The maths is all out there in the open; source code implementing the FFT can be found in many places. The original Cooley-Tukey paper is one of the most cited ever. The thing is not actually that big.

Let's do an informal survey: Stand up everyone here who fits my description of the typical engineer above. (Heater stands up). Hmm... that's a lot.

The way I see it is that at one end there is the impenetrable maths with its integrals, exponentials, imaginary numbers, Euler identities, complex roots of unity etc. etc. It is possible to understand the FT from that and get through the typical questions they might ask of a physics/electronics undergraduate, but at the same time not have a real feel for what is actually going on and for sure be a long way from the "F" in "FFT". That's me.

On the other hand there is the source code you can find everywhere. This is totally unintelligible unless you know all the little tricks that went into creating it and so does not help in understanding at all.

Then there are all the explanations of how to put the "F" into "FFT" and get from the maths to the code. Most of those seem to be written by people who either don't really want to tell you, haven't really understood the thing and just regurgitated what they saw elsewhere, or do really know what they are doing but are so familiar with it all that they can't imagine the state of mind of their dumbfounded audience.

Is understanding the FFT like some kind of mental illness? Those who don't have it can't possibly imagine what it is like to be a sufferer. Those who do have it can't possibly explain what it's like:)

Well, yes, mathematics has often been described as "self analysis", you can't just learn it like a bunch of historical facts. No one can explain it to you, although they might give hints and clues. You have to see it yourself and convince yourself of the truth of things.

OK, I'm putting my head protection back on... there must be a way to explain the "F" in "FFT" without Euler.

12-07-2010, 12:52 PM

One aspect to consider of the final form of this discussion is the Path of Discovery.

Yep, essential, that's what I'm interested in. Especially as I haven't trodden much of the path yet.

The target audience is those that "need to use" FFT/DFT but do not necessarily have a deep math background.

Yep, but not just use, it's those that have a nagging itch to know what is going on.

But, we notice the "A-ha!" moments tend to come after "side trips" to more complicated references.

Side trips, yes. I'm hoping those side trips can be simple thought experiments based on common knowledge that lead step by step to an A-ha moment. Like my attempt at showing how bit reversal comes about naturally from the SUM() game. That was part of the magic of Bean's CORDIC paper: start with some very familiar ideas, explore them a bit, then put them into the final picture.

So far I've read through several times; its still watering dry ground, but getting better.

If you have any sticking points in Bean's document you should speak up. Perhaps Bean could smooth the way in a later version. Often one forgets where one got stuck once one gets to the end of the journey.

12-07-2010, 03:01 PM
OK Heater, I'm standing with you and agree with your assessment of the "dirty secret", but the good news is we are getting close.

I think at this point it is time for a first draft, a target if you will, that is going to take a few broadsides maybe even get sunk.

I would suggest this obvious layout:
A statement of what we are doing and why.
Then hit 'em with Bean's graphs and description.
Include some pseudo code (and perhaps some Spin in an appendix to pay homage to the Prop).
Next we state explicitly the problem, i.e. the magnitude of the computation in a practical sense.
Now we hit 'em with Heater's description of the butterfly.
Put the two together and expand the guts (pseudo code?).
Now introduce recursion and show the connection to the butterfly.
A summary complete with pseudo code and Spin, and if someone is crazy enough, Assembly.

Ok guys let the shooting begin.


Just read through the thread, and I now feel we need to expand the lead-in to include some digital sampling: introduce the digital sampling concept along with enough trig to show how to express the sample points on a graph. This can be further expanded when we get to the "guts" part.

12-08-2010, 04:59 PM
I'm free!

I've done it: reached enlightenment, tasted nirvana, come to the end of a long road. My guru status is assured (in my mind anyway). I am not a dummy, I am a free man. The world is mine. Is that Dr. Evil you hear laughing? No, it's me:)

Well, OK, what actually happened is that I managed to:
a) Understand enough of the maths of the DFT to convince myself I know how it works. No, feel how it works.
b) Understand enough of the Cooley-Tukey idea to know why splitting the DFT up into halves repeatedly works, and how to do it.
c) Understand how the pseudo code on wikipedia for the recursive splitting works and that it does actually do what is said of it.
d) Now the biggy, actually implement that pseudo code as a real program in C++.

And it works, it does just as Cooley and Tukey and Gauss said it would! The chain is complete.

Calming down a bit, this may be the end of a long road, it's taken me over twenty years to get this far. But it may also be the start of another long road. However I'm savouring the view from here with a large can of beer.

Ahead is:
1) How to rewrite this in C without using the shorthand provided by the "complex" class of C++.
2) This is recursive and requires separate input and output buffers. How to get rid of the recursion and do all the work "in-place" to save memory.
3) Creating a Spin FFT and then perhaps a PASM FFT.
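
For item 1), a hedged sketch of what dropping the "complex" class looks like: one FFT butterfly written with plain real arithmetic, the way a C port would have to do it. The function name `butterfly` and the variable names are mine, purely for illustration.

```python
# One FFT butterfly with plain reals: (ar, ai) and (br, bi) are the two
# complex inputs as real/imaginary pairs, (wr, wi) is the twiddle factor.
def butterfly(ar, ai, br, bi, wr, wi):
    # complex multiply t = w * b, unrolled: (wr + i*wi)(br + i*bi)
    tr = wr * br - wi * bi
    ti = wr * bi + wi * br
    # sum and difference outputs of the butterfly
    return (ar + tr, ai + ti), (ar - tr, ai - ti)
```

The only real trick is the unrolled complex multiply: four real multiplies and two adds per twiddle.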

And worst of all, how to create a paper describing how this all works. I still don't see any escape from the heavy maths for the un-mathematical audience, which is what this is all about. Perhaps "Fast Fourier Transform For Advanced Dummies", where all the hairy maths is illuminated in minute detail but the scope is kept as narrow as possible.

Attached is my effort and a plot showing firstly 1024 samples of a sine wave, followed by 512 frequency bins containing a peak at frequency = 16.

By the way, someone has already posted an FFT in PASM here a while back; forget who it was. Just wondering why he has not piped up on this thread.

12-08-2010, 05:03 PM
All right since I shoved my foot in the door ready or not here is a text document which I think we can use as the launch point. There is much still to be done - have fun

12-08-2010, 05:13 PM
I just saw your post, and while I'm not quite at your place yet, I can see your campfire and hear the fireworks.
The text document I just posted does not pretend to get too deep into the math. I think my target audience is an intelligent engineer, like all of us here, who because of need or pride just has to go a few steps deeper without climbing all the stairs in the ivory tower. My intention is kinda like how I treat my car: I like knowing how it works, but grease on my hands is icky and not for me.

12-08-2010, 05:44 PM
Bean, Ray0665 and all,

I made a mistake re: the definition of "butterfly". I have to offer my apologies for any confusion and try to make amends.

Bean posted a question about the "butterfly". I posted an explanation, a revelation for me at the time, of how the magical bit-reversal stage works in the common FFT implementations. I used the SUM() game as an example and referred to it as "the butterfly".

This is not the common meaning of "butterfly" in FFT circles.

Turns out that whilst the samples are shuffled around by the bit-reversal stage of a common FFT, producing "butterfly" like patterns, as the actual work progresses the data is being constantly re-shuffled such that it is in the correct order when done. It is this re-shuffling, during the work stage, that is commonly referred to as "the butterfly".

What can I say? Sorry. I was misled in that both shufflings are called "butterfly", but the latter use is far more common in everything I have read since. For example see the flow chart in chapter 12 of the DSP Guide: http://www.dspguide.com/ch12/2.htm

This was brought to my attention by RickB.

12-08-2010, 06:12 PM
Well it looks like I tripped on something on the way to your campfire.
There will be a slight delay while this dummy reattaches the missing parts.
I had not realized that the decomposition had to be undone, so this is another, deeper aha moment for me. I suppose that if I had gotten some grease on my hands by actually trying to implement the algorithm I would have realized I was missing something. But that is what this process is all about.
So yippie - I think.

Post edit - time passes -
So as a result of my misunderstanding I have realized two important things.
First, I realized that when we finish the reordering and have the scrambled array, we have shifted from the time domain to the frequency domain. Recall from Bean's diagrams that if we have an N Hz waveform and multiply it by an N Hz waveform, the result is N; in other words, nothing has to be done to treat it as in the frequency domain.

The second thing I realized, with your help of course, is that the reverse reordering also takes place in the frequency domain, such that the final array is back in the correct sequence and is the frequency spectrum, and that the butterfly (where the guts live) takes place in the frequency domain.

Your reference to chapter 12 of the Scientist and Engineer's Guide is what did the trick for me.
My head hurts but it feels good. I'll modify the first draft text shortly...

Graham Stabler
12-08-2010, 08:32 PM
I am Spartacus!

12-08-2010, 08:35 PM
Great. Care to elaborate on which foothill you are standing and how you got there? And more importantly what you tripped over on the way.

Don't forget all the slaves were killed after they said that :)

Graham Stabler
12-08-2010, 10:31 PM
To be honest, most of the time I just type FFT or FFT2 into MATLAB and it takes care of things. I am however familiar with the use of Fourier transforms in optics, which is quite intuitive and also gives some unique insights into things. You mentioned that the Young's slits experiment gives the Fourier transform, and this is true; if you code the diffraction of light you sort of end up coding an FT without knowing about it.

A few ideas:

* I think a spectrum analyser is a good way to help introduce the basic concept of spectrum, it is tangible, you stick a sine wave in and a little line pops up. Talking about them is a good substitute.

* Another key part of the Fourier transform is that it is a many to one map. Each point on the spectrum is a weighted summation (sounds better than integral) of all the parts of the original signal. This makes a lot of sense if you think about it because it is the whole signal that determines its frequency not just one point in time/space. This is exactly what you see when you code up diffraction, at a distant plane each point is a weighted sum which takes phase into account.

* The complex number thing is just another way of representing amplitude and phase in a mathematically convenient way, there is really nothing to it. I wish they had not called it complex cos it isn't. It does help however if you understand what a vector is and that it can be divided into two parts in two directions at 90 degrees to each other (I avoided normal/orthogonal). Really complex numbers just keep these two parts together in a special number and that number has some interesting algebraic properties that help out.

I really think that an animation would be the best way to teach these ideas, I'd quite like to animate the Cordic for dummies, that might be even better.


Graham Stabler
12-08-2010, 10:44 PM
Another important point is that waves/frequency components can generally be added to each other without affecting each other.

Drop two stones in a pool: the waves cross each other, and there is interference where they cross (peaks where they rise together, troughs where they fall together, and points where the rising and falling cancel). However, once they have crossed each other's paths they are just as they were. The same is true of the frequency components that make up a signal; they add up to cause an effect, but they are unique and don't affect each other.

I think when "dumbing down" it is important not to forget principles like this.


12-09-2010, 06:10 AM
What you described as the behavior of a real system is called "linearity" in math. Whenever we describe something in terms of math, we make a mistake, and we have to accept this, as we do not have another way to gain a deeper understanding. The problem is: fun mostly comes from nonlinear behavior, like surfing on the beach. That may be one reason why math is so boring to most!

We have learned that a system of equations has a solution if there are as many equations as unknowns, and if these equations are linearly independent. So we can see the Fourier transform as such a system of equations, where given is an array of n numbers (the knowns: the signal over time) and sought is another array of n numbers (the unknowns: the amplitudes of a set of frequencies: a DC level, the fundamental and the harmonics).
Normally we have a set of equations and we use, for example, Gaussian elimination to invert a matrix, or simpler methods if we have just two equations, as in primary or secondary school, I don't remember. Here in the FT it is not obvious that we have a set of those equations, because we just say "let's do an FT".
If we realize what "harmonics" means, we can see how these equations look. There are "border conditions":
1. All the sinusoids have a certain amplitude. 2. The sum of the squares of sine and cosine is constant. Multiplying the values of two of these functions at each point along the x-axis and summing up gives zero when the two functions are not equal, and 1 whenever they are equal.
This behavior is equivalent to a set of n linearly independent equations to find n unknowns.
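
The "zero when unequal, 1 when equal" property can be checked numerically. A small sketch (my own illustration, sampling cosines over one whole window; the sum comes out as n/2 rather than 1 because nothing is normalized here):

```python
import math

def correlate(k, m, n=64):
    """Sum of products of two sampled cosines over one full window of n points."""
    return sum(math.cos(2 * math.pi * k * i / n) * math.cos(2 * math.pi * m * i / n)
               for i in range(n))
```

Different frequencies sum to zero; equal frequencies sum to n/2, which is exactly the linear independence the post describes.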

12-09-2010, 08:25 AM
I'm wondering if we don't need an "Advanced Engineering Mathematics For Dummies".

Anyone who has not studied such things will be totally blinded by the normal definition of the DFT with its sigma and "e" and "i" gibberish. If they try to read up on such things they will be diverted and lost in a great wall of mathematics.

The problem is how to explain enough of that stuff so that someone can see that the mathematical definition is equivalent to Bean's explanation, and how to get from the mathematical definition to some code that looks like Bean's Spin program.

The next, bigger, problem is that I can't get from the basic DFT definition to what we need for that recursive FFT without all that sigma, e, and i gibberish.

Along the way the Euler identity needs to be introduced.

So for example:

The sigma thing needs to be explained as being sum notation. It should be shown how a simple "repeat" loop can do that, and how things can be factored out of the sum in order to remove excess multiplications (required for FFT decimation).

Then the is "e". I reckon it is not necessary to know why the "e" is special or used, or even what it's value is. Why? Because we are going to get rid of it soon enough thanks to Euler.

Then the is "i". Well books can be written about "i" and all the funky things yo can do with imaginary numbers. BUT for our purposes only a simple explanation of what it is is required and a demonstration of a few simple manipulations. I would probably skip all thoughts of complex numbers as containing phase or being like vectors.

To get from the definition to code you need Euler. Turn the exp into sin and cos. Euler's identity can be stated and discussed without proof.

With those things out of the way we can get the reader from the DFT definition to code in a few steps. We can show that the result is the same as Bean's Spin example, which was arrived at from a more graphical approach.

But now the biggy. With those simple maths concepts set up we can go, in simple steps, from DFT to FFT mathematically and from there to code. Perhaps not the ultra-optimized, bit-reversing, in-place, non-recursive FFT that you see around a lot, but the simple recursive implementation that comes naturally from the maths.
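
A sketch of where that DFT-definition-to-code journey might end up: the sigma becomes two nested loops, and Euler's identity turns the exp() into cos() and sin(). This is my own illustrative Python, not Bean's Spin.

```python
import math

def dft(samples):
    """Direct DFT: the sigma is just a loop, and exp(-i*a) is cos(a) - i*sin(a).
    Returns one (real, imag) pair per frequency bin; no "e" or "i" in sight."""
    n = len(samples)
    bins = []
    for k in range(n):                   # one pass per frequency bin
        re = im = 0.0
        for i, s in enumerate(samples):  # the sigma: sum over all samples
            angle = 2 * math.pi * k * i / n
            re += s * math.cos(angle)    # Euler: the exp splits into...
            im -= s * math.sin(angle)    # ...cos(a) - i*sin(a)
        bins.append((re, im))
    return bins
```

An n-sample cosine with k whole cycles lands entirely in bin k (real part n/2), which lines up with the graphical multiply-and-sum picture.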

12-09-2010, 08:53 AM
Math for dummies, that's it! Maybe we create a lecture "math for mummies". No joke: the Egyptians and Greeks had a very basic understanding of math and could solve a lot of problems using this knowledge.
A very important step to understanding math is that math does not exist alone and for itself, but was invented to solve problems. Look, for example, at the logarithm. The logarithm was invented to handle problems "ten times more difficult" than what we can do just now. That means: the difference between 1 and 10 is 9, but between 10 and 100 it is 90, and 90 is ten times 9. Yet facing the problem of getting from 10 to 100 is no more frightening than standing at 1 and going to 10. At least, sometimes.
So this problem is solved by the logarithm, because log10 says: from 1 to 10 is 1, from 10 to 100 is 1, and so on. And now we can ask: what are the properties of a function which solves such a problem? This is where math comes from. At least, there is some evidence for that. ;-)

12-09-2010, 09:05 AM
I have heard the story of Galilei, who by discovering the moons of Jupiter came very close to discovering "i". Why? He saw little bright dots moving along a line crossing Jupiter with a sinusoidal velocity, and thought: either there is a hole in Jupiter, so the dots can just tunnel through, or there is a hole in the spheres, so the dots are moons running around Jupiter. As he was sure that what he saw was not imagination (imaginary, "i"), he just said: there is no sphere to which the stars are fixed, and so he removed the barriers between earth and heaven. But as he also feared for his pension, he stated that there was imagination, and so the evil came into the world.
OK, that is not really the story. The truth is: the sphere in reality was made quickly, quickly, and had a leak. Or two.

So we know that Galilei was also one of the first tourists to visit Hawaii!

Dave Hein
12-09-2010, 03:25 PM
It might be easier to understand the FFT by first looking at the fast Walsh-Hadamard transform (WHT): http://en.wikipedia.org/wiki/Fast_Hadamard_transform . The basis vectors for the WHT consist of only +1's and -1's. The fast WHT has a similar algorithm to the FFT, with butterfly operations and bit reversal. All of the fast WHT butterfly operations consist of just the sum and difference of a pair of numbers. The transform matrix for a two-point WHT (and any other orthogonal transform, such as the DFT) is as follows:

Y = H * X

where Y and X are 2-point vectors and H is the 2x2 transform matrix

|y0| = | 1  1 | * |x0|
|y1|   | 1 -1 |   |x1|

which is the same as

y0 = x0 + x1
y1 = x0 - x1

Larger WHT transforms can be recursively broken down into 2-point operations because the WHT is defined as

H2n = | Hn  Hn |
      | Hn -Hn |

As an example, the 4-point WHT matrix is given by

H4 = | H2  H2 | = | 1  1  1  1 |
     | H2 -H2 |   | 1 -1  1 -1 |
                  | 1  1 -1 -1 |
                  | 1 -1 -1  1 |

There is also a scaling factor of 1/sqrt(N), which I have ignored for simplicity.
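
A hedged sketch of how that recursive definition unrolls into a fast WHT, using nothing but the sum/difference butterfly above (illustrative Python; natural/Hadamard output ordering assumed, and the 1/sqrt(N) scaling ignored as described):

```python
def fwht(x):
    """Fast Walsh-Hadamard transform (Hadamard ordering), built entirely
    from sum/difference butterflies. len(x) must be a power of 2."""
    x = list(x)
    h = 1
    while h < len(x):                        # one pass per "stage"
        for i in range(0, len(x), h * 2):    # each block of 2h samples
            for j in range(i, i + h):
                # the 2-point transform: y0 = a + b, y1 = a - b
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x
```

Applying it twice gives back N times the input, which is the transform-it-back-and-divide-by-N trick with no sqrt needed.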

12-09-2010, 04:20 PM
Regarding the FWHT:

* it is fast and simple [8^)
* the bit reversal is a bit funky...it's actually the bit-reversed Gray code of the index (at least for the version I have)
* a nice little paper relating Walsh functions to the FFT is: http://www.wiseguysynth.com/larry/schematics/walsh/walsh-small.pdf (Note: basically look at the 1st page, after that it's mostly about implementing this using flip-flops!)
* the forward transform is the same as the reverse transform
* instead of scaling each transform by 1/sqrt(N), you can just transform it, transform it back, then divide by N, which since N is a power of 2, means a simple bit shift.


12-09-2010, 08:43 PM
I would like to stop here and ask a few questions, because I feel this thread is bogging down. It is not my intention to sidetrack the discussion, and I apologize in advance.

So this here thread is titled "Fourier for Dummies" so exactly who are the dummies we are talking about?
Dummies as in high school math? or Dummies with advanced degrees?

Personally I think we are talking about dummies in the middle, who have some pretty good math but aren't interested in the theoretical bottom line.
We have jobs to do, and we are very curious but don't want to spend a lot of time trying to really and truly understand every detail. Most of us know about the Fourier transform and we have seen and used spectrum analyzers.
We are software engineers or hardware engineers and serious hobbyists.
We did complex math in school and know what you mean when you say multiply two complex numbers.
What we really want to know is how the thing works and are ok if we gloss over some of the theoretical stuff.

This is the category I fit in and if I'm on the right track lets hear from you and if I am wrong, I again apologize to everyone and will be quiet.

At this point I understand what Bean wrote on multiplying waveforms and Heater has removed the mystery from the reordering.
I now realize that after this reordering we have shifted from the time domain to the frequency domain, and how that happened. So we now have to put things back in the correct order, on the fly, as we multiply by the various sin and cos waves, and why simply reversing the reordering process won't work.
Chapter 12 of the Scientist and Engineer's Guide to DSP outlines very nicely how this is accomplished and has some nice diagrams that helped me understand that process.

Along the way I learned a bunch of terms that now make a lot more sense.
So at this point I am truly dangerous. I think I understand the FFT, at least enough to satisfy my curiosity, and I am just dangerous enough to actually try it and, as you can see, talk about it with people who really do know.
What I would like to see now is this description all written up nice in one paper with some nice diagrams.
And that's what this thread was/is all about, isn't it?

And just for the record I couldn't have gotten here without this thread so everyone has much to be proud of.


12-10-2010, 07:57 AM

...who are the dummies we are talking about? Dummies as in high school math? or Dummies with advanced degrees?

Good question.

The first dummy to satisfy is me:). I think it was me who first asked Bean for an FFD after his superb CORDIC work.

Or, both. I'll elaborate later.

What we really want to know is how the thing works and are ok if we gloss over some of the theoretical stuff.

Yes. For example, I'm prepared not to expect an understanding of the Euler identity. Euler is required to get from the standard definition of the DFT to working code with cos() and sin() instead of exp(bla bla). Euler can be stated without proof. The point is that one may not know why the DFT is defined the way it is, but once you have seen it re-written in terms of sin() and cos() it makes intuitive sense when put together with something like Bean's exposition.

Heater has removed the mystery from the reordering. I now realize that after this reordering we shifted from the time domain to frequency domain and how that happened

I'm afraid you understand wrongly.

The input to the Fourier Transform is a bunch of signal amplitude samples taken at equal time intervals. A signal in the "Time Domain".

The final output of the FT is a bunch of amplitudes of frequency components at equal frequency spacing. Same signal expressed in the "frequency domain".

In the in-place (non-recursive) implementations of the DFT, the bit reversal is just a sort of preprocessing prior to the real work being done. The reordered samples are still our time domain signal, just munged up. The bit-reversal just gets the data into the order that it finds itself being used in during the recursive implementations.
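
A small sketch of just that preprocessing step, to show it is nothing more than a reshuffle of time-domain samples (illustrative Python; the names are mine):

```python
def bit_reversed(x):
    """Reorder samples so that the sample at index i lands at the index
    whose binary digits are i's digits reversed. len(x): a power of 2."""
    n = len(x)
    bits = n.bit_length() - 1            # e.g. n = 8 -> 3-bit indices
    out = [None] * n
    for i in range(n):
        # reverse the 'bits'-bit binary representation of i
        rev = int(format(i, '0%db' % bits)[::-1], 2)
        out[rev] = x[i]
    return out
```

Shuffling twice gets you back where you started, which is one way to see that no information is created or destroyed: it's the same time-domain data in a different order.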

Now about the target audience. Many people with the maths skills know all about Fourier, but they can't make the jump to actually writing a Fast FT. Of course life is too short to try to understand everything, so they end up just using library code or cutting and pasting FFT code they find. As software engineers we should feel a bit queasy about that; we should be curious to know how algorithms work. And what happens when we want an FFT for, say, a Propeller, where there is no existing implementation in Spin or PASM? Carefully reverse engineer a C version, still without knowing how it works? Blech.

I think we have at least two major target groups, the curious high school kids (or those whose math is at that level) and then the engineering grads who have done the math but are still blinded by FFT or even the actual meaning of the standard DFT definition. (That was me for over twenty years!)

But then we have a couple of levels of "scope", that is to say the target level of understanding to be imparted by the document. Perhaps more than one document is required. For example:

1) To get the idea that multiplying a signal by a bunch of sinusoids and averaging the result of each is a good way to measure the amount of a frequency in the input signal. Leading up to some code that actually does that. Bean has a good handle on the approach. This is teaching the DFT without needing to know the mathematical DFT definition or even hearing the term DFT. This is an excellent level to shoot for for high school level types. This is "Fourier for Dummies".

2) Starting with high-school-level math, introduce the DFT definition, and explain enough of all that sigma, "e", "i" stuff to get the reader to connect the definition with what we have in 1) above. It should be shown that you can get from all that sigma, "e", "i" stuff to the same code as Bean presents.

3) Continuing on from 2) but presenting the idea of "divide and conquer", which is after all a simple rearrangement of the DFT equation. And then build that up into a recursive code FFT implementation, as per my C++ example.

4) Somehow explain how an "in-place" non-recursive FFT algorithm comes about, bit reversal and all. This is still beyond me:)

By the way the FFT is truly remarkable.

For a 16K sample set the DFT, like Bean's example code, in C on my PC takes over 25 seconds.

My simple recursive FFT in C++ only takes 50 ms. A speed-up of about 500 times!

An in-place non-recursive implementation with bit-reversal is another 2 or 3 times faster.

As it is so amazing it is very intriguing to know how it works.

12-10-2010, 08:28 AM
Again a good contribution from Heater.
So I'll try to complement that.
Regarding complex numbers, I just wanted to show that real numbers are not so "real", by pointing to the moons of Jupiter: they run a sinusoidal path on the sky spheres, but we only see one axis, the projection (or shadow) onto an imaginary plane. The real movement, as we all believe we know, is more or less a circle around Jupiter. Precisely: it is an ellipse. So we always should raise the question: what is it that we see?
We feel familiar with voltages, so we see a voltage changing value over time as a sinusoid. This voltage can be generated artificially, by a computer and a DAC, or "naturally", by a capacitor connected to an inductor. Then, as we store energy in this circuit, the energy has to oscillate between C and L, so current and voltage show sine and cosine. But the energy is a constant! Now, if energy is constant over time, why do we need time? If something doesn't change, why is there time? Time is needed to have a potential change!!
Only the fact that something could change is sufficient to talk about time. OK, that again is very fundamental. But we all know that time is passing by; time cannot be collected or "saved". We have the impression that there is just a moment. But that again is only half the truth. And half a truth is just not the truth. So, as we might have seen: even if something is constant, like the energy, some properties of this energetic system may change, and they may change according to a very simple differential equation, where the solution is a sinusoidal change over time. Like voltage and current; and, as there is the border condition "constant energy", the voltage and current sinusoids have to be shifted in time by 90°, that is, a quarter of one period.
Another way to generate a sinusoidal voltage is an AC generator in a power station. This is done by having rotational inertia; energy flows out with a sinusoidal voltage and the same current IN PHASE, so the power output (or energy flow) is not constant over time. How to heal this? A turbine delivers energy to the generator constantly and continuously, but the generator delivers energy discontinuously and not at all constantly. Now, this again is solved quite trickily by nature: if we connect three AC generators to the turbine, each delivering pulsed energy, and shift the three voltages/currents by a third of a period, the pulses add up to a constant value!
As Heater said: a lot of things come together, and you cannot understand the full picture by just looking at an algorithm.
Lets go on and see, what happens!

12-10-2010, 09:23 AM

I've just cottoned onto what you meant by the Galilei story...
If you see a big planet in the sky,
and if you see it has a little moon moving back and forth from side to side of the planet,
and if you believe these must be both on the same "heavenly sphere",
then it follows that the big planet has a hole in it!

See how ideas get entrenched in our psyche? From our modern perspective we would never even consider the possibility of a hole in a planet.

Still, if planets had such holes then moons could have such orbits...

Never thought about power stations much. As you say, a single-phase generator into a resistive load would be rather bumpy.

I have no idea about time but consider this:

There is no "time" in the Fourier Transform.

If I have 1024 samples of a sine wave signal in memory, or drawn out on a graph on paper, all those maxima and minima and all the values in between exist there in the same moment. I can count the maxima; maybe there are 16 of them. Perhaps my samples were collected over 1 second; then I could say I have a frequency of 16 cycles per second. But perhaps those samples are not of a time-varying signal at all. Perhaps they are measurements of the height of a brick wall along its length.

12-10-2010, 09:29 AM
Unsurprisingly, I didn't manage to follow the explanation in the DSP paper linked above, so I pose another question:
How is it possible to generate a sinusoidal signal with infinite precision just by saying: let everything be zero and one thing be one?
Because this is what we do!
In the frequency domain you create an array of values where virtually all are zero and one is one. OK, to have the whole truth, we have to mirror this array to have a second 1.
Now we just multiply, add and reorder, and in the end we have a perfect sinusoid.
The insight I hope to gain in the end is a feeling for why this works, and why the number of periods in the time domain is determined by the shift of the one in the frequency domain.
We could start this way:
We place the one in the middle, so we generate the highest frequency; in time there have to be two values, one for odd and one for even elements.
Now we shift the one by one place, and we will have half the frequency.
Further, we can try to understand why, if we do not put a 1 into one of the twin arrays and a zero into the other, but instead divide this value into two parts, the sinusoid comes out phase shifted.
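
The experiment described above can be tried numerically. A sketch (my own illustration: a direct inverse DFT using only cosines, which is enough when the spectrum is real and mirrored into the twin slot as described):

```python
import math

def inverse_dft(bins):
    """Direct inverse DFT for a real, mirrored spectrum: each output
    sample is a weighted sum of cosines, one per frequency bin."""
    n = len(bins)
    return [sum(b * math.cos(2 * math.pi * k * i / n)
                for k, b in enumerate(bins)) / n
            for i in range(n)]

# Frequency domain: let everything be zero and one thing be one,
# with the "one" at bin 4 mirrored into the twin slot at bin n-4.
n = 32
spectrum = [0.0] * n
spectrum[4] = spectrum[n - 4] = n / 2   # scaled so the sinusoid has amplitude 1
signal = inverse_dft(spectrum)          # a cosine with exactly 4 periods
```

Shifting the "one" from bin 4 to bin 2 halves the number of periods in the output, which is the shift/frequency relationship being asked about.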

12-10-2010, 09:47 AM
Yes, Galilei was just brilliant. His conclusion was the inverse. He saw the dots moving along a line crossing the planet, and as people, even the Pope, were not ready to say "there is a hole in the planet", there had to be a hole in the sphere. So the sphere was no longer impenetrable, and therefore one could have direct access to heaven. That was a threat to the establishment. And there was another guy at that time. He wrote: I do not publish what I found, not because I know about Galilei, but because some people, not clever enough to understand what I discovered, will take me as witness for their doctrine, and that is not my intention. ;-)

12-10-2010, 09:51 AM
Sure, there is time in the time domain: if you write down a signal as a function of time and you determine frequency, you do not care about the fact that the time distance no longer exists; you freeze the signal on paper or by whatever means. But you surely will not say: this value in the frequency domain represents a spatial period of 2.21" on a sheet of paper coming from an instrument with a feed rate of 1"/second.

12-10-2010, 02:58 PM
I beg to differ about misunderstanding the reordering, and for the sake of those reading let me explain.

We start with a time ordered set of samples s0,s1 etc and we reorder them as in the bit reversal. So what did we end up with in terms of the FFT? A scrambled set of samples in time? or something else? The answer depends on how you think of it. (This by the way was another of several aha moments for me so I make a big deal of it rightly or wrongly.)
From chapter 12 of the engineers guide
"The next step in the FFT algorithm is to find the frequency spectra of the 1 point time domain signals. Nothing could be easier; the frequency spectrum of a 1 point signal is equal to itself. This means that nothing is required to do this step. Although there is no work involved, don't forget that each of the 1 point signals is now a frequency spectrum, and not a time domain signal."

These frequency spectra still need to be multiplied by sin and cos to get the power and phase for each frequency bin, and they still need to be put back into the correct order, which is a big piece of this discussion still to be written.

Dave Hein
12-10-2010, 03:31 PM
It sounds like the engineer's guide has an error. Reordering the original samples just produces a scrambled set of samples in time. The conversion from the time domain to the frequency domain is done by multiplying by the sine and cosine basis vectors and computing the sums of the products. The frequency spectrum is the absolute magnitude of the frequency components.

12-10-2010, 04:01 PM
You are making my head spin, and I find it hard to believe the book is wrong. It is possible, but until proven otherwise....
But my skills are not good enough to make a strong argument. I think of it this way...
From Bean's graphical example, multiplying a 5 volt sine by a 1 volt sine produced a magnitude 5 at the FREQUENCY of the 1 volt sine.
When we have the signal scrambled and multiply it by some frequency, we get the magnitude at that frequency only if we guessed right about frequency and phase;
if our guess was wrong we get something else representing magnitude and phase.

Isn't that what we did in theory at the single sample point where the scrambling is complete?
and can not that be thought of as being in the frequency domain?

Dave Hein
12-10-2010, 04:37 PM
The FFT is just a fast implementation of the DFT. The original data is in the time domain, and the final output of the FFT is in the frequency domain. Intermediate stages of the FFT are neither in the time domain nor the frequency domain; they are just intermediate results.

12-10-2010, 05:54 PM
Let me start over:
1.) What is the Fourier transform good for, and why?
2.) How is the Fourier transform realized?
3.) What can the Fourier transform be compared to, among the things we already believe we understand?

These are three questions that have only one thing in common: they are related to the Fourier transform.

Let me start with the third question:
We know that a vector is just a pointer into space, having a length and pointing in a certain direction. This direction could be fixed, so that all vectors point in the same direction but could have different lengths. This could even be the case if the direction is one of many in a plane, one of many in 3-dimensional space, or one of many in a higher, even infinitely dimensional space.
A vector is given by its coordinates, and since the vector's length is the square root of the sum of the squares of all coordinates, we can always generate a vector of length 1 by dividing every component by the length of the original vector.
Having done this, we can create a second vector perpendicular to the first one, a third perpendicular to those two, and so on, until we have as many vectors of length 1, each perpendicular to all the others, as we have coordinates. This set of vectors we call a basis.
Now imagine that in three-dimensional space we have a coordinate system x, y, z and we shift or rotate this system; then we can give a description of any vector in either basis.

The same is true if we have a higher, let's say 1024-dimensional, vector space.

What can be shown is: if you define a vector whose components are samples of a sinusoid at the fundamental or a harmonic frequency, then this vector is perpendicular to every other vector with this property. Therefore "the harmonics" are nothing but the basis of a vector space.

If we now have a signal in the time domain, and let's say this signal has 16 samples, we could say: the first sample designates the x-axis, the second the y, the third the z, the fourth ?? and so on. And now, as we already have the vectors representing the generated sinusoids, we only have to determine what the signal's coordinates are in the frequency basis. That is all.
So a Fourier transform is nothing but a change of the basis coordinate system of a vector. The only problem is to accept that a vector with 10000000 components is not really different from a vector with 1 component: a single vector always has only a length and a direction, and if the vector points along the x-axis, its representation is just ( length, 0, 0, 0, ..... -->)
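The claim that the harmonics are perpendicular is easy to check numerically. A small sketch (Python for brevity): sample sines and cosines of harmonic frequencies over one fundamental period and compute their dot products.

```python
import math

N = 16  # dimension of our "sample space"

def cos_vec(k):
    # the vector whose components are a sampled cosine of harmonic k
    return [math.cos(2 * math.pi * k * n / N) for n in range(N)]

def sin_vec(k):
    # the vector whose components are a sampled sine of harmonic k
    return [math.sin(2 * math.pi * k * n / N) for n in range(N)]

def dot(a, b):
    # ordinary dot product, as for any pair of N-dimensional vectors
    return sum(u * v for u, v in zip(a, b))
```

Any two distinct harmonics give a dot product of zero, and each such vector has squared length N/2 (for 0 < k < N/2), so dividing by sqrt(N/2) normalizes it to length 1 exactly as described above.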

Dave Hein
12-10-2010, 07:51 PM
I like to view the data as an Nx1 matrix (i.e., a vector). The transform is implemented by multiplying an NxN matrix by the data vector to produce a new vector. Each row of the DFT matrix consists of a sine or cosine of increasing frequency. The data can be transformed back to the original domain by multiplying it by the inverse DFT matrix.

One nice property of the DFT, and of all orthogonal transforms, is that the inverse matrix is just the transpose of the forward matrix. Therefore, each column of the inverse DFT matrix consists of a sine or cosine of increasing frequency.

The DFT matrix can be factored into log2(N) sparse NxN matrices, where each row and each column has only two non-zero values. Each stage of the FFT is implemented by doing a matrix multiply with one of the sparse matrices. The sparse matrix multiplications can be performed on individual pairs of data. This is known as the butterfly operation.
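The matrix view sketches down to a few lines (Python for brevity). One caveat worth labelling: for the complex DFT matrix the inverse is the conjugate transpose scaled by 1/N; for real orthogonal transforms the conjugation does nothing and it really is just the transpose.

```python
import cmath

def dft_matrix(N):
    # Row k of the forward DFT matrix samples exp(-2*pi*i*k*n/N):
    # a cosine minus i times a sine of increasing frequency k
    return [[cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N)]
            for k in range(N)]

def matvec(M, v):
    # NxN matrix times Nx1 vector
    return [sum(m * u for m, u in zip(row, v)) for row in M]

N = 8
F = dft_matrix(N)
x = [1.0, 2.0, 0.5, -1.0, 0.0, 3.0, -2.0, 1.5]   # some time-domain data

X = matvec(F, x)   # forward transform

# Inverse: conjugate transpose of F, scaled by 1/N
Finv = [[F[n][k].conjugate() / N for n in range(N)] for k in range(N)]
y = matvec(Finv, X)   # round trip back to the time domain
```

Running the round trip recovers the original data vector to floating point precision.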

12-11-2010, 10:24 AM
We are discussing Fourier For Dummies right?

So what's all this talk of "vectors", "matrices" and "1024 dimensional space"?

Perhaps that's all true and a nice way to look at it, but my understanding of the FFT does not require any such concepts. That is not a language for dummies. After all, a simple 32 bit integer can be regarded as a vector in a 32 dimensional space, or it can be seen as a polynomial of degree 31. These are useful views in some cases.


Sure, there is time in the time domain: if you write down a signal as a function of time and you determine frequency, you do not care about the fact, that a time distance doesn't exist, you freeze the signal on a paper or whatever means.

Sure there is not. After all, we are just punching numbers into a calculator. The numbers can represent whatever we like. As I said, the input could be samples of the height of a brick wall along its length. No time involved there. The FT of that may tell you something about the skill of the bricklayers.


I cannot see the output of the bit reversal stage as being in the time domain. No more than the rearrangement of numbers in my SUM() game, prior to addition, is the sum. Sure, the Fourier transform of a one element sample set is the value of the sample, but all those following multiplications by sin and cos are what get you the FT of the entire set.

12-11-2010, 11:38 AM
There are two ways to make FFT for dummies. The first is the one Cooley–Tukey showed. It's so clever that even computers are able to do it. The other way is to enable man to understand what is behind the scenes. So FFT for Dummies means to me not to change man (we are all dummies), but to improve our ability to understand. Make better use of reason.

"my understanding of the FFT, does not require": as far as I understood, there is no understanding yet.

OK, now about the time and frequency domains. There is no time, there is no frequency. Time and frequency are concepts. Time is a way to express that something changes. Frequency is a way to express that something is not changing. Time excludes frequency and vice versa.

There is a saying: they never come back. It relates to boxers. But, rarely, they do come back. So, who is wrong? Is it better to have more cogs or more memory? Better to have skills to work or money to spend? So, if there is a statement "there is no time, we are just punching numbers into a calculator", then I have to argue: obviously the sequence of punching in the numbers matters. So the sequence creates or defines time. To the calculator it matters in which order the numbers are punched in, but, for example, it is not relevant whether they are punched in at equal time intervals. So, as the numbers are stored into an array of memory cells, the index defines time. That is different from sampling the numbers with the ticks of a clock.

OK, I agree, it is difficult to imagine a 1024-dimensional space and to see how vectors are rotated relative to an axis. But decomposing the transformation matrix t * A = f (time domain * matrix = frequency domain) into 2x2 sparse matrices seems to me a very well suited idea for gaining an understanding, even if the familiar 3-dimensional space is not sufficient to visualize even a 2 point time domain signal with complex input values. And all of this without a graphical representation. But I am sure we are on the right way.

12-11-2010, 01:08 PM

But decomposing the transformation matrix t * A = f (time domain * matrix = frequency domain) into 2x2 sparse matrices seems to me a very well suited idea for gaining an understanding...

Except, I can't for the life of me understand any of that:)

"...all of this without a graphical representation"

I think that's my problem, I pretty much can't understand anything unless I can form a picture of it in my mind. With the picture all of a sudden the equations and derivations start to have meaning rather than just being a bunch of symbols shuffled around on paper according to some set of rules.

12-11-2010, 03:26 PM
I have included ch 12 of the engineers guide as a pdf below. From our discussions so far we have the samples scrambled, we know we need to multiply by sin and cos, and we also know that we need to get the samples unscrambled to get our final power spectra. We need to determine what sample to multiply with what other sample, and we need a strategy to cycle through all the samples to accomplish that. And oh yes, it would be nice if we understood what we are doing.

I have read through pages 230, 231, and 232 of the attached pdf about six times (yes, it takes a while to get through several inches of bone, and this is not about which domain we are in; as Dave said, we start here and end there and that's all that matters). If I understand correctly, we are being shown not only the theory of what we are doing but, in a very practical way, how to select the samples for multiplication such that we unscramble the samples into the final order. Namely, which odd/even pairs to use at each level of the butterfly.

Does this clarify anything for anyone else like it does for me?
Can we use it?
Is this stuff right for the "Fourier for Dummies" paper?

The last few pages are about actual programs that for me are now understandable.

12-11-2010, 03:33 PM
Creating a picture in somebody's mind! That is the big challenge. So, if we fail, it's not a fault. It's just the hint to try it a different way. As everybody has an idea about symmetry, it should be a hook to bring to mind an expanded understanding of this concept. What we learn is: we look into a mirror and we see our face to be symmetrical. We plot a dot on a paper, draw a line, and are able to draw a mirrored dot. We draw a dot and a second dot and we mirror the first dot at the second. So even if we are not able to use a single point as a mirror physically, we can do this in our mind. And as we sketch and mirror all the single dots of the sketch on this "mirror point", we realize that by mirroring at a point we get a rotation. So rotation is a type of symmetry we are not intuitively aware of. So, as everything is relative, it doesn't matter whether we make the dummies more clever or the problems simpler. Experience shows: whenever a "dummy" was able to fix a problem that was simplified, it turns out that he soon is able to solve other, not simplified, problems. Or why do we send children to school? Just to enable the dummies to teach children ;-)

12-11-2010, 03:41 PM
I started to work through this paper a few days ago and found that, to me, it makes no sense. Look just at fig. 12-1 and you find that this paper was not written carefully. In the time domain you have N values, in the frequency domain 2*(N/2+1) values, making N+2 values. That cannot happen; there is no way to create something from nothing.

Going on, there are other statements that are just settings, where you are not given the reason, and so you cannot get a deeper understanding.

The time domain has 14 elements, the real and imaginary parts of the frequency domain having 8 each.

It is just the wrong way to start with a real DFT, because the natural way is to have complex input and output, even if this is not intuitive to see.
As the problem with the "imaginary" part is not solved, that is, still open, why not explain this first, and then explain why, if the imaginary part has the value "0" for all samples, we can use this to reduce the computation effort by making use of some symmetries.

12-11-2010, 03:54 PM
ErNa, did you err? I see (N/2) + 1, not N/(2+1).
Remember My Dear Aunt Sally?

And the diagram is labeled 0 .. N-1 regardless of the box count.

With apologies, ErNa:
This illustrates the difference between the practical guys and the theoretical guys.
The theorists are upset by outliers (and for good reason) and
the practical guys are perfectly happy ignoring them as long as they get results.

Fourier for Dummies is for the practical guys.

12-11-2010, 04:14 PM
I do not know the reason why they didn't take a "double-even" number of samples like 12, if they didn't like to use a 2^n number. Was it just to show that there is a limitation of the FFT to 2^n samples, but not for the DFT? But the box count just reflects what the text says: create 2 times N/2 + 1 frequency points from N time points. That may seem negligible, but isn't acceptable. If there is a hole in a pot, the whole pot will be worthless. (if not made from rare earths ;-) )

12-11-2010, 06:24 PM
OK, I just found this here: http://nietzsche.holtof.com/Nietzsche_human_all_too_human/sect1_of_first_and_last_things.htm and go on to paragraph 19; this shows why we have to be so precise, or at least should try. The subtle things make the difference between the parts and the whole, sometimes only the arrangement of the parts. So if you try to analyze something by breaking it apart, you will never find what makes the thing unique.

12-11-2010, 09:22 PM
Here is a nice look at Tukey the man. www.stat.uchicago.edu/~pmcc/tukey/Tukey.pdf

Did you know that it was Tukey who first called the binary digit a "bit" and the first to call programs "software"?

Despite which he tried to stay away from computers as much as possible.

12-12-2010, 12:12 PM
There is always a story behind it. Very interesting document! Thanks.
Coming up to the half-way point, it seems fascinating, but no clue yet what it is about!

12-12-2010, 06:30 PM
Thanks also heater I think the fact that he did all these things and had a disdain (maybe actual contempt) for the math is even more amazing. He certainly was a character. So there may still be hope for us.

I personally am at the point where I have gotten to where I wanted to be in terms of the FFT. It could NOT have happened without this entire discussion, side trips and all, and my head is spinning with all sorts of questions about how this can be presented, in a way that makes sense, to someone who does not read the entire thread.

Do we follow my original outline? Is the order of Ch 12 good? Where should Bean's pdf be positioned, and the expansion of bitwise reversal by Heater?

Somebody, Anybody!!! I am not sleeping and will explode soon... rescue me from this path before I start wanting to do theoretical math - I'm sick and need help bad....

12-12-2010, 06:42 PM
Moving on to practical matters.

I'm playing with two Fast Fourier Transforms in C. Both are attached.

fourier.c I found on the net, can't remember where. It seems to be a typical text book FFT implementation, using a bit-reversal stage prior to getting on with the real work, and, importantly, doing all the work "in-place": the output ends up in the input buffer. Good for space constrained micro-controllers. I can't for the life of me get a grip on how this style of FFT works.

ditfft2.c is a bit different. I have written it from my simple understanding of how the maths works: the simple steps it takes to get from the DFT definition to a recursive "divide and conquer" algorithm. It uses recursion rather than the bit-reversal step that is normally employed to remove recursion. It has an input and an output buffer, so it wastes memory somewhat.

ditfft2 is my baby, I like it, I understand it. It is easily derived from the C++ version I posted, which itself is easily derived from the maths. Problem is a) it uses twice as much memory, b) it takes about 50% to 100% longer to execute on a 1024 sample set.

Now actually that's not so bad. I just skimmed an old paper on FFT which quoted another paper as saying "It has been shown that a recursive FFT implementation will be 6 times slower than the non-recursive approach"

How wrong these math heads can be:)

What is more amazing is that if I crank the sample size up to something like one million then my recursive effort starts to be twice as fast as the traditional code!! Not sure at what point they are neck and neck yet.
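The recursive divide-and-conquer idea itself fits in a few lines. In Python rather than C, purely for brevity (ditfft2.c itself is in the attachment), together with the textbook O(N^2) DFT to check it against:

```python
import cmath

def fft_recursive(x):
    # Radix-2 decimation in time: split into even and odd samples, transform
    # each half, then combine with the twiddle factors exp(-2*pi*i*k/N)
    N = len(x)
    if N == 1:
        return list(x)            # the DFT of a single sample is itself
    even = fft_recursive(x[0::2])
    odd = fft_recursive(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

def dft_direct(x):
    # The textbook O(N^2) definition, used as a reference
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]
```

Split into even and odd samples, transform each half, combine with the twiddles. That's the whole algorithm.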

Dave Hein
12-12-2010, 06:56 PM
You can do an in-place computation if you compute one stage at a time. A recursive algorithm computes parts of each stage until it reaches the final stage, so it needs to use two buffers. Look at the following diagram. If you compute the 2x2 butterfly operations one stage at a time, the results of each stage can be stored back in the original buffer. Also note that this form of the FFT has a minimal number of complex multiplies. The butterfly operations use only adds and subtracts, with multiplications between the stages. Another way to reduce computation is to precompute all the sines and cosines that will be used in the transform, so you do not re-compute the same sine or cosine multiple times.

The webpage at http://cnx.org/content/m12016/latest/ does a good job of describing the decimation-in-time algorithm.


12-12-2010, 07:15 PM

Yep I know, I know. And I've seen that diagram a million times. Problem is, the more I look at those things the more my head spins; it has not clicked into place. There is no way I could write a non-recursive FFT with bit-reversal from scratch. I just have not convinced myself of the link from the math to the code. It is the many incomplete and hand waving descriptions of that, as seen in the DSP guide and elsewhere, that have befuddled me for decades.

Still with my new found insights I will not give up.


Somebody, Anybody!!! I am not sleeping and will explode soon... rescue me from this path before I start wanting to do theoretical math - I'm sick and need help bad....
You have my sympathies. I now believe Fourier is a mental illness. Its symptoms include: confusion, withdrawal from the world, feelings of inadequacy (sufferers start to refer to themselves as dummies), and an intense concentration on increasingly bizarre things; sufferers may start rambling about "imaginary numbers", "negative frequencies", or the existence of "harmonics" in all things.

There is no known cure although therapy with other sufferers in self-help groups can alleviate the symptoms. This should be approached with care as sufferers often do not understand each other.

Practically, how we might proceed with "Fourier For Dummies" is a puzzle. I may have shone a light on the bit-reversal but I'm still not seeing how to use it.

As I said before I can't make the link from DFT definition to FFT without the maths so that starts to be out of scope for Dummies.

Where is Bean? He kicked off with a document, so the ball is in his court:)

12-12-2010, 09:33 PM
Fear not! I am here to help, for I have not delved into the recursive stuff yet only the iterative.

Remember the purpose of doing the bit reversal; I'm sure you do. Well, the butterfly does the reverse, putting the samples back in the original order, and shows where to do the multiplies, adds and subtracts. The two point diagram at the top left of the thing Dave just posted is a single stage butterfly; its inputs are two of the scrambled points, which are combined and multiplied by the sin and cos elements. This gets repeated for all the points on one level, and again on the next level, until in the end it is unscrambled and you have the power spectrum.

More in a bit I have an urgent matter here....

Dave Hein
12-12-2010, 10:11 PM
The key to understanding the flow diagrams is to understand the operations performed in each 2x2 butterfly operation. The FFT butterfly is shown below. This version is computed with a multiply, an add and a subtract. If the inputs are G and H, then the outputs can be computed in-place as follows:

temp = W * H
H = G - temp
G = G + temp

http://cnx.org/content/m12016/latest/image3.png

Edit:There were some errors in the original expressions I posted. The expressions are now correct.
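The three lines above can be checked against the two-multiplier definition of the butterfly. A quick sketch (Python for convenience):

```python
import cmath

def butterfly(g, h, w):
    # The in-place form: one complex multiply, one add, one subtract
    temp = w * h
    return g + temp, g - temp

# Two-multiplier definition: the outputs are G + W^k * H and G + W^(k+n/2) * H
n, k = 8, 1
g, h = 1 + 2j, 3 - 1j
wk = cmath.exp(-2j * cmath.pi * k / n)
wk_shifted = cmath.exp(-2j * cmath.pi * (k + n // 2) / n)
g_out, h_out = butterfly(g, h, wk)
```

Both forms give the same two outputs, because W^(k+n/2) is just -W^k.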

12-12-2010, 10:46 PM
Hello again
To continue, notice the pattern of the twiddle factors (thanks Dave). Now, here I found Ch 12 page 230 helpful, and I don't pretend to be an expert because my focus is on the what and the how rather than the theory behind it, but I think it is showing what happens in the frequency domain when we do the bit reversal in the time domain, and even though we are not doing the shift and add as shown, the bit reversal ends up in that place. Perhaps someone could give some input here?

Also I found the basic program on page 235 and the flow chart on 232 helpful as well.

12-13-2010, 07:20 PM
Here is a demo in spin of the bit reversal algorithm, More to follow....

12-13-2010, 08:20 PM

You should know that Spin has bitwise reverse operators: ‘><’, ‘><=’

So you can write:

reversedIndex := originalIndex >< 10

Which will reverse the first 10 bits of originalIndex to create reversedIndex.

This would shorten and speed up your code:

Something like the following for an array of 1024 samples (10 bit wide index)

PUB BitReversal
repeat A from 0 to 1023
temp := Samples[A]
B := A >< 10
Samples[A] := Samples[B]
Samples[B] := temp

Or something like that:)

12-13-2010, 08:45 PM
Dave Hein,

My head is full of butterflies!

No, it's not a flashback to our psychotropic chemistry experiments of the '70s.

I feel that your little hints, clues and pointers have caused a spark which is about to fire up a huge flash light on the bit-reversing FFT for me.

I've been running over the Douglas L. Jones page at http://cnx.org/content/m12016/latest/#fig2 so many times. I have a feeling that I'm about to be happy with writing such an FFT from scratch.

One sticking point at the moment is: how does it work to go from a 2 by 2 butterfly with two "twiddle" factors to having just one twiddle factor? As in Jones' "Additional Simplification" section.

In that section he takes off into cuckoo land by referring to "the theory of multi-dimensional index maps", which of course links to an unintelligible page. I've never heard of this theory before; please tell me we don't need it for understanding the problem at hand!

Dave Hein
12-13-2010, 08:58 PM
The butterfly with 2 multipliers is shown below. The value of Wn is exp(-i*2*PI/n), where i = sqrt(-1). Wn can also be written as cos(2*PI/n) - i * sin(2*PI/n). Note, the i in the diagram is intended to be an index, and not the imaginary i. We'll use "k" for the index instead of "i" to reduce the confusion.

The two multipliers in the diagram are Wn raised to the k-th power and Wn raised to the (k + n/2) power. The second multiplier is exp(-i*(k+n/2)*2*PI/n), which is the same as exp(-i*2*PI*k/n) * exp(-i*2*PI*n/(2*n)). The second factor is just exp(-i*PI) = cos(PI) - i * sin(PI) = -1. Therefore, the butterfly can be computed by multiplying the second input value times a single twiddle factor, and then computing the sum and difference.
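The identity W^(k+n/2) = -W^k that makes this work can be spot-checked numerically in a couple of lines (Python sketch):

```python
import cmath

n = 16
# exp(-i*PI) = -1, so advancing the exponent by n/2 just negates the twiddle
max_err = max(abs(cmath.exp(-2j * cmath.pi * (k + n / 2) / n)
                  + cmath.exp(-2j * cmath.pi * k / n))
              for k in range(n))
```

The maximum error over all k is at floating point rounding level.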


Dave Hein
12-13-2010, 09:25 PM
The 8-point FFT butterfly diagram would translate to the following pseudo-code:

fft8(complex x[8])
complex w[4]

// Compute W's
w[0] = 1
w[1] = cos(2*PI*1/8) - i * sin(2*PI*1/8)
w[2] = cos(2*PI*2/8) - i * sin(2*PI*2/8)
w[3] = cos(2*PI*3/8) - i * sin(2*PI*3/8)

// Bit Reversal
swap(x[1], x[4])
swap(x[3], x[6])

// First Stage
butterfly(x[0], x[1], w[0])
butterfly(x[2], x[3], w[0])
butterfly(x[4], x[5], w[0])
butterfly(x[6], x[7], w[0])

// Second Stage
butterfly(x[0], x[2], w[0])
butterfly(x[1], x[3], w[2])
butterfly(x[4], x[6], w[0])
butterfly(x[5], x[7], w[2])

// Third Stage
butterfly(x[0], x[4], w[0])
butterfly(x[1], x[5], w[1])
butterfly(x[2], x[6], w[2])
butterfly(x[3], x[7], w[3])

butterfly(complex a, complex b, complex c)
complex temp
temp = b * c
b = a - temp
a = a + temp

swap(complex a, complex b)
complex temp
temp = b
b = a
a = temp
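For anyone who wants to run it, the same thing transliterated into Python (the by-reference parameters become in-place list operations), with a direct 8-point DFT included as a reference to check against:

```python
import cmath

def butterfly(x, i, j, w):
    # In-place butterfly on elements i and j of x
    temp = x[j] * w
    x[j] = x[i] - temp
    x[i] = x[i] + temp

def fft8(x):
    # x: list of 8 complex samples, transformed in place
    w = [cmath.exp(-2j * cmath.pi * k / 8) for k in range(4)]

    # Bit reversal (for N=8 only indices 1<->4 and 3<->6 swap)
    x[1], x[4] = x[4], x[1]
    x[3], x[6] = x[6], x[3]

    # First stage
    for i in (0, 2, 4, 6):
        butterfly(x, i, i + 1, w[0])
    # Second stage
    butterfly(x, 0, 2, w[0]); butterfly(x, 1, 3, w[2])
    butterfly(x, 4, 6, w[0]); butterfly(x, 5, 7, w[2])
    # Third stage
    butterfly(x, 0, 4, w[0]); butterfly(x, 1, 5, w[1])
    butterfly(x, 2, 6, w[2]); butterfly(x, 3, 7, w[3])
    return x

def dft8_direct(x):
    # The O(N^2) definition, for checking the result
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / 8) for n in range(8))
            for k in range(8)]
```

Feeding both the same 8 complex samples produces matching spectra.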


12-13-2010, 10:12 PM
I did know about the bit reversal ops, but it didn't enter my mind when I was writing the demo. I was really just translating from an earlier program I wrote. Thanks for reminding us all. My goal was to isolate the bit reversal with the intent of including it in the final paper, so speed is not a consideration. Next I think an implementation of the three main loops without the butterflies or twiddle factors, and finally just the butterfly and twiddle factors.

If we could get some more volunteers, that would be great.

12-14-2010, 05:10 AM

Brilliant thank you. That searchlight is seriously warming up now.

Strangely enough I now find myself seeing the twiddle factors W as the Nth roots of unity, of which there are N. Seeing them as vectors at equally spaced angles, one can visualize that the +N/2 rotates the vector by 180 degrees, which is the same as reversing the vector by negation.

Still your explanation with exp()...is about the last missing piece of the puzzle for me.

As I said before I can't see how to get to the "FFT for Dummies", that is from definition to code, without all that rather undummy maths.

12-14-2010, 06:57 AM

Why do the k values in your second stage Ws go 0 and 2 rather than 0 and 1 ?

// Second Stage
butterfly(x[0], x[2], w[0])
butterfly(x[1], x[3], w[2])
butterfly(x[4], x[6], w[0])
butterfly(x[5], x[7], w[2])
Instead of:

// Second Stage
butterfly(x[0], x[2], w[0])
butterfly(x[1], x[3], w[1])
butterfly(x[4], x[6], w[0])
butterfly(x[5], x[7], w[1])
I know that's how it appears in the diagram, but it does not agree with my
reading of the decimation equation, which I summarize as:


Dave Hein
12-14-2010, 01:12 PM
The w's I computed were based on N = 8, so w[k] = W{k,8}. In the second stage of the 8-point FFT, we're doing the last stage of a 4-point FFT. The weights should be W{1,4}, but we're using W{2,8}, which is equivalent.

12-14-2010, 01:20 PM

Oh yes, I was just coming to that conclusion as your post arrived. At each stage down the N gets halved and so....

Which is actually the same as the answer I gave to a question many posts back but I did not see the connection immediately:)

Thanks for confirming my suspicions. Your pseudo code converted to C++ works just fine with the W{2,8}

12-14-2010, 06:27 PM
Updated bit reversal in Spin; included use of the Prop bit reversal op as suggested by heater.

Just to prove I still maintain my Dummy status, the thing kept returning an unmodified array!!
After too many attempts the light finally went on:
you need to do only the first N/2-1 items; if you do them all, the second half simply undoes everything.

12-14-2010, 09:09 PM
Cool. No worries, that's a kind of nice mistake to make:)

12-15-2010, 08:21 PM

I don't think that bit reversal using >< is going to work. It will trample on itself. My fault for being hasty when I suggested using ><.

Thing is, with the rev operator in Spin and the rev instruction in PASM it is possible to fill the FFT array in the time decimated order as the data arrives, say from an ADC. There is almost no overhead with using REV. So it saves needing a bit-reversal stage all together.

Dave Hein
12-15-2010, 08:52 PM
The Spin bit reversal will work if you use the same logic as the "classic" routine. Loop from I = 0 to N-1, but only swap if I < J.
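In pseudo-Python (for brevity; the Spin version is the same logic with >< doing the reversal):

```python
def bit_reverse(i, bits):
    # Software equivalent of Spin's >< operator: reverse the low 'bits' bits of i
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

def scramble(samples, bits):
    # Classic in-place reordering: swap only when i < j, so each pair is
    # exchanged exactly once and nothing gets swapped back
    for i in range(len(samples)):
        j = bit_reverse(i, bits)
        if i < j:
            samples[i], samples[j] = samples[j], samples[i]
    return samples
```

Because a swap is its own inverse, guarding with i < j guarantees each pair is swapped exactly once; applying the routine twice restores the original order.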

12-15-2010, 09:39 PM
Oh my God. I have finally done it.

I've written my very own in-place, non-recursive, bit-reversing, Radix-2 Decimation In Time Fast Fourier Transform.

All without any peeking at existing source codes (honest). My few remaining brain cells are all holding hands in just the right way for me to see the whole thing from the mathematical DFT definition, through Cooley-Tukey decimation, bit-reversal, butterfly simplification (thanks Dave), right down to my code in C++.

All inspired by the lively discussion on this thread. Thank you everybody.

As I suspected if you can fight your way through all the layers of maths gloop and diversions there is actually a beautifully simple idea at the bottom of it all.

My effort is attached for your consideration. I've tried to keep it as straight forward and simple as possible with hopefully meaningful variable names and layout.

Perhaps next up is a C version, a Spin version and if I'm feeling adventurous a PASM version.

I really am a dummy, it's taken me about twenty five years to achieve this level of understanding!

12-15-2010, 11:25 PM
Dude... Well done!!! But, ya gotta do the spin version....
There seems to be an echo in here about the >< op undoing things. Must be the acoustics...
@Dave I would think N/2-1 is faster than if I < J

12-16-2010, 01:38 AM
Oh my God. I have finally done it.

I've written my very own in-place, non-recursive, bit-reversing, Radix-2 Decimation In Time Fast Fourier Transform.
the whole thing from the mathematical DFT definition, through Cooley-Tukey decimation, bit-reversal, butterfly simplification (thanks Dave), right down to my code in C++.
..there is actually a beautifully simple idea at the bottom of it all.
I really am a dummy, it's taken me about twenty five years to achieve this level of understanding!


So, we have success according to the criterion stated in the original post, that being that either Heater or I can claim understanding by the time work stops.

But while we have evidence of understanding (heater's code), we don't have the text of the explanation yet.

I hope the thread continues in that direction, so I have time to re-read all the posts and study the code example.

Incidentally, it took less than a month of heater's hard work since the original post to reach this point. So far the baseline is twenty five years and three weeks till understanding.

Dave Hein
12-16-2010, 02:02 AM
@Dave I would think N/2-1 is faster than if I < J
N/2-1 won't work for N greater than 8. For N=16, you would never swap locations 11 and 13, for example.
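The counterexample is quick to confirm (Python sketch): for N=16 the pair (11, 13) has both members in the upper half, so a loop that only runs i = 0 .. N/2-1 never touches either of them.

```python
def bit_reverse(i, bits):
    # Reverse the low 'bits' bits of i
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

# 11 = 1011b reverses to 1101b = 13, and both 11 and 13 exceed N/2-1 = 7
pair = (bit_reverse(11, 4), bit_reverse(13, 4))
```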

Heater, congratulations on the FFT. Now on to convolution, correlation, and many more things that can be done with the FFT.

12-16-2010, 04:22 AM

Dude... Well done!!! But, ya gotta do the spin version....

Thank you, slave driver:)


But while we have evidence of understanding (heater's code), we don't have the text of the explanation yet.

Therein lies the problem. Having the idea in mind is a long way from being able to explain it to anyone who does not. Good teachers are hard to find. Plus, as I have said before, I can't get to the code from the DFT definition without all the maths gloop, and that takes any current explanation from me out of scope for "dummies". Unless we start with "Engineering Maths For Dummies" first. It is actually not such a lot to get through.


Heater, congratulations on the FFT. Now on to convolution, correlation, and many more things that can be done with the FFT.

Thank you Dave, and thanks again for the hand holding and tips. That is just what I needed to keep me looking at it until it clicked. As for convolution, correlation etc we will see. I think I'm going to take a rest from anything that feels like thinking until next year:)

Except for that Spin version....

12-16-2010, 01:57 PM
@Dave Thanks for that catch,
I updated the two uploads and hand-checked output with 4, 8, 16, 32, 128 and 256; seems good now.

12-16-2010, 08:51 PM
Attached is my first working effort at converting my home made FFT into Spin.

This takes 1024 samples of a signed 12 bit test signal and produces a nice amplitude spectrum in 512 samples which are printed via FullDuplexSerial. The attached image is a plot of the resulting spectrum given a test signal of two frequencies at maximum amplitude.

Hopefully it is clear how to adapt it to the input from and output to any application.

The complex multiply in the middle of the butterfly is written for clarity rather than speed. The twiddle tables could be reduced in size. Exercises for the reader.

Any questions:)

12-18-2010, 10:07 AM
I've updated my last post with a working version of my 1024 point FFT in Spin.

12-18-2010, 11:17 AM
Here is heater_fft version 0.3.

Optimized the butterfly from 4 multiplies to 3
Reduced "twiddle" tables from words to longs.
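For the curious, the standard 3-multiply trick (often attributed to Gauss) looks like this in Python; whether this is exactly the rearrangement used in heater_fft 0.3 is my assumption, but it is the usual way to trade a multiply for extra adds:

```python
def cmul3(a, b, c, d):
    """Compute (a + ib) * (c + id) = (ac - bd) + i(ad + bc)
    using three real multiplies instead of the schoolbook four."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return (k1 - k3, k1 + k2)  # (real part, imaginary part)
```

On a chip where multiplies are expensive relative to adds (as on the Prop), this saving lands in the innermost butterfly loop.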

12-18-2010, 05:48 PM
After choking on that twiddle factor table...
and since this thread was born from Bean's excellent description of the CORDIC algorithm,
and CORDIC is something that can be used when building the twiddle factors,
I think we should pay homage to Bean and use a CORDIC package.

I am uploading the Trig package I cobbled together (after stealing it out of Beau Schwab's Compass module) along with a very simple demo of its use.

12-18-2010, 07:10 PM
This discussion helped stupid me to a deeper understanding of complex numbers, and I'll try to share my "insight":

Complex numbers are written: A + iB, and that doesn't mean that two numbers are added; the "+" just shows that a complex number has two components that are not comparable and so cannot be added. The two parts of a complex number have nothing in common, therefore they have to be combined to form a unity.
Sounds complicated, but is really simple. 5 + 6 = 11. But 5 beer and 6 coke = 5 beer + 6 coke. As we were used to drinking beer before coke was discovered, we just wrote: give me 5, and we all knew he meant beer. And now, after coke, we order 5 + 6 coke. And as coke was invented after, they called: give me 4 and imaginary 6.

Now, as there are different brands of coke, beer, and so many hard and soft drinks, we have forgotten, how everything began.

Back to complex numbers: we are used to putting down complex numbers in a plane and could write: Ax + By. So we can identify a complex number as a two-dimensional vector. And instead of the Cartesian coordinates X, Y we also could use polar coordinates L, phi (length and angle). And from this back to Cartesian we could write L * cos(phi) x + L * sin(phi) y, or: L * cos(phi) + i L * sin(phi)

But there is still one question: what is the difference between a 2 dimensional vector and a complex number?

To me the answer is: a number is just a number, but a number only makes sense if there are rules to combine numbers, rules to add and multiply. And that makes the difference.

If we add two vectors, we add the components like we add normal numbers: a1x + b1y (first vector) + a2x + b2y (second vector) = (a1+a2)x + (b1+b2)y
Still be aware: a1x + b1y means we combine two numbers, a1 and b1, to form one unit, and a1 and b1 have nothing in common; they point in perpendicular directions. We normally write (a1, b1)
Addition is the same for complex numbers, drawn as vectors, but noted in the form a1 + ib1

The difference comes from multiplication of two vectors:
If we multiply a1x + b1y (first vector) * a2x + b2y we apply the rule, that the x- and y-components are individually multiplied and both results are added.

If we multiply complex numbers, a1 + i b1 and a2 + i b2, we multiply them similar to binomials: (a1 + i b1) * (a2 - i b2) = a1*a2 - ib1*ib2 - a1*ib2 + a2*ib1
There is a little but important difference: in the second complex number the sign of the imaginary part is flipped!

Why is this important:
Imagine you have two 2-dimensional vectors, 0x + 1y and 1x + 0y; these vectors obviously are orthogonal, because one points to y, one to x. Their scalar product is 0*1 + 1*0 = 0
Now we have two complex numbers: 0x + 1y and 1x + 0y, or as we like to write: 0 + i1 and 1 + 0i.
If we multiply them we have: ( 0 + i1 ) * ( 1 - i0 ) = 0*1 - i1*i0 - 0*i0 + i1*1 = 0 - 0 - 0 + i*1
And that obviously is not zero! But if we had seen the two numbers as vectors, they were orthogonal and their scalar product must be zero.

So, from this follows: sometimes it makes sense to see complex numbers as vectors in the X/Y-plane, but this is just to have a visualization. To do math it is not sufficient to have the numbers, it needs the rules too!

12-19-2010, 06:09 AM

As usual we have somewhat opposite views of things:) Although the practical result should be the same.

Complex numbers are written: A + iB and that doesn't mean, that two numbers are added,

Actually I believe it does mean that they are added. For example, in simple algebra I might give you a number in the form:

x = 4 + 3y

That's not much use to you until you have a value for y. Say 2; then you know x is 10. You can't do the actual addition before you know the value of y.

So it's the same with "i":

x = 4 + 3i

If we had a value for i we could do the addition. There is no fundamental difference of types like adding "coke" and "beer". In fact there are no types implied here at all.

Only problem is that we have defined "i" to be unknowable, the square root of minus one, therefore we can never do the addition.

Sometimes when writing code I realize I need a function to do something, also that I don't know how to do that thing yet, also that I'm busy thinking about some other part of the program right now. What to do? Easy, pretend I have such a function, give it a name, "foo", write a call to "foo()" at the appropriate point in the code, then get on with what I was doing. I will create foo() later.

The number "i" is the mathematical equivalent. We think of a thing, square root of minus one, and realize we can't evaluate it. We don't give up and stop there; no, we give it a name, "i", and drop it into equations like anything else. After all, we don't know what it is but we can say things about it: "i squared is minus one". Magically the "i" can disappear at the end of our musings and leave us with a solid knowable result.

Loopy Byteloose
12-19-2010, 06:18 AM
Excuse the intrusion, but I happen to be so dumb that I am waiting for the end product in order to learn proper
Fourier. Best wishes to all.

I am wondering if I can use this to determine THD (total harmonic distortion) of an inserted sine wave into a quality audio amp. That seems likely to be a good 'first project'.

12-19-2010, 06:40 AM
Loopy Byteloose,

I am wondering if I can use this to determine THD (total harmonic distortion) of an inserted sine wave into a quality audio amp. That seems likely to be a good 'first project'.
Indeed you can. However it is not so easy. Let's say you want to measure down to 0.01% distortion. That's one part in 10000, so already it looks like we need a good 16 bit ADC. Nowadays we might want 24 bits. We need to know that the ADC is distortion free to that level.

When it comes to doing it on the Prop, my code is not up to it. I'm using 32 bit arithmetic on fixed point values. 1.0 is scaled up to 4096 giving only 14 bit precision for the sine and cos tables. The input samples are limited to +/- 4096 so as to prevent overflow.

Of course it would be straightforward to extend this to 64 bit arithmetic or use the Float32 object if it has enough accuracy. At a great loss of speed.
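As an illustration of that fixed-point scaling (a sketch only; the actual Spin code may differ in detail): with 1.0 represented as 4096, the product of two scaled values carries the scale factor twice, so one copy has to be shifted back out after every multiply:

```python
SCALE = 4096                  # fixed-point "1.0" (2**12)

def to_fix(x):
    """Convert a float to the 4096-scaled fixed-point representation."""
    return int(round(x * SCALE))

def fix_mul(a, b):
    """Multiply two fixed-point values; shift out the extra SCALE factor."""
    return (a * b) >> 12

half = to_fix(0.5)            # 2048
quarter = fix_mul(half, half) # 0.25 in fixed-point, i.e. 1024
```

This also shows where the overflow limit comes from: two values near +/-4096 multiply to something near 2**24 before the shift, which is why the inputs have to stay bounded in 32 bit arithmetic.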

12-19-2010, 11:11 AM
OK heater, that gives me a handle to answer and we should come closer:

You say: x = 4 + 3y is not much use; I say: from my view it's of much use. This equation gives you not one value, as you would have with y = 2, but a whole set of values, and it defines a straight line with an inclination of 1/3, crossing the y-axis at -4/3 and the x-axis at 4. So this equation gives you an infinite number of pairs of numbers.

And it is a completely different situation with "i". If you write x = 4 + 3i, then x is a symbol for the complex number (4, 3). x is no longer just a number, as it would be with an equation.

You say: if we had a value for "i", I say: we have this value for "i". And the value is given by the fact, that i*i = -1.
So it is not correct, if you state: "i" is defined to be unknowable. The square root of -1 is knowable, it is just not a real number.

Further: the "i" never disappears, and there is no magic.

Remember how we were taught to calculate. First the teacher introduced Pi, then "e", then one million, then he told us: don't be afraid, we leave this aside and come back to it later. That is where all the problems come from: they didn't tell us that there are complex numbers too, and Hermitian vector spaces of infinite dimensionality. ;-)

OK, what I want to say: there is a way top-down and a way bottom-up. The whole picture we see in the end. And there is no difference.

Imagine: you are starting with natural numbers; then we introduce negative numbers, as the inverse operation of + creates numbers we do not get when adding natural numbers. Then we introduce p/q to have the inverse of multiplication.
Imagine: you could say it is only allowed to subtract a number from another one if this number was added before. Then you never would get negative numbers. But what if you added 4 and subtracted 3 and 1? Still there is no negative value. But if you subtracted 4 twice?
You see, the natural way is to "invent" negative numbers or to "discover" negative numbers.
Step by step we fill the gaps between the natural numbers and we discover that there is, even between 0 and 1, an infinite amount of numbers. And if we calculate 1/x we see that just in the neighborhood of 0 there is an infinite amount of numbers. And that there are numbers like Pi, which can only be expressed exactly by saying: this is the ratio of a circle's circumference to its diameter.
And "i" is just the square root of -1. But why do we not need the square root of -2? Because this would be a very complicated number. As we know, we do not know the value of sqrt(2); we only know that the square of this value is 2. And what if the situation were complicated by introducing "i"? We would just be overstressed.
But rescue is near: we remember that -2 = -1 * +2 and that sqrt(-2) = sqrt(-1) * sqrt(+2), and so we divided the problem and conquered!
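That divide-and-conquer can be checked in a couple of lines of Python, which happily works with complex numbers out of the box:

```python
import cmath
import math

root = cmath.sqrt(-2)                        # sqrt(-2) = sqrt(-1) * sqrt(2) = i*sqrt(2)
assert abs(root - complex(0.0, math.sqrt(2))) < 1e-12
assert abs(root * root - (-2)) < 1e-12       # squaring takes us back to -2
```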

Do we share now the same point of view?

12-19-2010, 01:07 PM

Do we share now the same point of view?
Of course we do. Sometimes:) I just occasionally find myself walking around an idea, looking at it from different angles, kicking the wheels, and seeing if I want to buy it or not.

You say: x = 4 + 3y is not much use, I say: from my view it's of much use.
Yes, you are right. I just meant that it could not be simplified any more, down to an actual single number. As with my statement about "i", we may not know what "y" is but that does not stop us using it in equations and getting useful statements.

And it is completely different situation with "i".
Not from my current point of view. What if I give you "x = 4 + y" and I don't tell you that "y = 3i"? You could work with that up until the point you want to get a single result.

If you write x = 4 + 3i, then x is a symbol for the complex number (4, 3). x is no longer just a number, as it would be with an equation.
Ah, there is the thing. "x" here is not just a normal value, it has two parts. So:

x = 4 + 3i

is actually two equations:

1) Knowable part of x = 4
2) Unknowable part of x = 3

We just can't simplify any more. When we write "x = ..." and say "x is complex", all we are doing is inventing a shorthand notation for the two equations. That's all.

You say: if we had a value for "i", I say: we have this value for "i". And the value is given by the fact, that i*i = -1.

How can you say "we have this value for "i"" whilst at the same time saying "i" is the square root of minus one? Which is a value we for sure do not have!

So it is not correct, if you state: "i" is defined to be unknowable. The square root of -1 is knowable, it is just not a real number.
It is not knowable. Please write it down if you have it:) But as we see using the trick of "just name it and continue anyway" it becomes useful.

Further: the "i" never disappears, and there is no magic.
Sure it can disappear. The output of the FFT, for example, is not normally presented directly in (real, imag) form; instead the magnitudes of each frequency are calculated. POOF! The i's vapourize.

Alternatively one could somehow display the FFT result showing both frequency and phase. POOF! the i's have gone again.

Or one could just present the results as raw (real, imag) pairs. But then one is just casually dropping the i's, acknowledging that it was just a notational trick anyway and we were working with two sets of equations all along.

Perhaps we should avoid the word "magic" when talking maths, although it does often seem quite spooky, full of happy coincidences that just happen to get you where you want to be.

Now here is a thing, from simple algebra we can show that:

(a + ib) * (c + id) = (ac - bd) + i(ad + bc)

I will say there is "magic" here. We have previously stated that we can't combine the so-called "real" and "imaginary" parts through addition and simplify our complex numbers.
BUT look what has happened here: the "b" and "d" of the imaginary world are showing up and having an effect on the real world side of the result. And vice versa. Cool hey?
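That cross-over is easy to check numerically with Python's built-in complex type (just a sanity check, nothing deep):

```python
a, b, c, d = 2.0, 3.0, 5.0, 7.0
z = complex(a, b) * complex(c, d)
# the "imaginary" b and d show up in the real part, and vice versa
assert z.real == a * c - b * d   # 2*5 - 3*7 = -11
assert z.imag == a * d + b * c   # 2*7 + 3*5 = 29
```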

Here is another thing:

The input to the FFT is in (real,imaginary) pairs. We put the input signal into the real part.
BUT we could put the input signal into the "imaginary" part. The frequency magnitude result would be the same. There is nothing more or less real about either part of the input.
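A little brute-force DFT in Python (a sketch for illustration, not the heater_fft code) shows exactly that: feed the same signal into the real part or the imaginary part and the magnitude spectrum comes out identical, because the DFT is linear, so DFT(i*x) = i*DFT(x), and multiplying by i does not change a magnitude:

```python
import cmath

def dft(x):
    """Brute-force O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

sig = [0.0, 1.0, 0.0, -1.0, 0.5, 0.3, -0.2, 0.1]
real_in = dft([complex(s, 0) for s in sig])   # signal in the real part
imag_in = dft([complex(0, s) for s in sig])   # same signal in the imaginary part
# the magnitude spectra agree
assert all(abs(abs(a) - abs(b)) < 1e-9 for a, b in zip(real_in, imag_in))
```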

12-19-2010, 01:40 PM
If memory serves me correctly, when I was learning complex numbers, i (but j was used instead, go figure) was indeed the square root of minus one, but we further defined it as a 90 degree counter-clockwise rotation of the X axis. It followed then that higher powers of i produced more rotation and sign changes.

12-19-2010, 02:22 PM
OK, it takes some time to argue, and I am a bit short of it. So a short answer:

"Now here is a thing, from simple algebra we can show that: (a + ib) * (c + id) = (ac - bd) + i(ad + bc)"

Yes, you can show this from simple algebra, the problem is: we are not facing "simple algebra".

Let me explain: the value of 1 is, funny things happen: 1. And here value and absolute value are equal. This is not true for -1: here the value is -1, the absolute value +1! So why should we define a property for 1 and -1 which is equal, so that there is no difference between -1 and 1? There is a reason: sometimes it doesn't matter, like when calculating kinetic energy.

Now take this situation: you are here, another person (person1) is one mile away. The persons are 1 mile apart. There is a third person (person2), also one mile distant from you. But what is the distance person2-person1? We cannot know this, but the distance is not negative and is less than or equal to 2 miles. Depending on the situation, we may run into a problem if we do not know more. That is a reason to introduce coordinates. Again: we assume the persons are in a plane, so we have a 2-dimensional case. Now, as the persons have X and Y coordinates, we are able to determine the distances from the coordinates.

Now, we could introduce coordinates X, Y, because we are used to doing so. And we know that very easily we can expand this to Z and even more, if necessary.
But there is a second way when we know that we are looking at a plane and there are only 2 coordinates: we give the X-coordinate no name and the Y-coordinate the name "i".

Now we place one person in the center, the second somewhere else. OK, for simplicity: let the second be at a distance of 1, so actually somewhere on a circle with radius 1.
In the first case we determine the distance by calculating sqrt(x*x + y*y)
Now we are used to expressing positions in a plane by using distance and angle phi. And from this we know: the x-coordinate is cos(phi), the y-coordinate sin(phi), if radius = 1

Let us see this scene from a different point of view:
We are used to seeing numbers arranged on a number ray. And there is no place for a number sqrt(-1). So we invent the plane of complex numbers. And i now defines the y-coordinate.
Now we expanded our universe of numbers from one-component numbers to two-component numbers.

But as we use the same representation for complex numbers as we use for 2-dimensional vectors, we still have to see them as different things! But to a certain degree there is a relation.

Now we have two ways to define a circle in a plane:
by giving two coordinates X, Y: cos(phi), sin(phi)
or by giving a complex number: cos(phi) + i sin(phi)

Now let us calculate the distance from the center to a point on the circle:
D = sqrt(cos(phi)² + sin(phi)²) for the first case, and the length is 1, as we commonly believe ;-)

or, using "simple algebra"

D² = (cos(phi) + i sin(phi)) * (cos(phi) + i sin(phi)) = cos(phi)² - sin(phi)² + i * 2 * (sin(phi) * cos(phi))
So, this equation is not really simple to solve. What is the distance?

As mathematicians are men and men are lazy, they solved this problem this way:

Knowing that (a+b)*(a-b) = a² - b², they just said: if we have to multiply two complex numbers, we have to invert the sign of the complex part of the second number!

D² = (cos(phi) + i sin(phi)) * (cos(phi) - i sin(phi)) = cos(phi)² + sin(phi)²

And now, oh miracle, we have the same result as we have it from X/Y point of view!
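A two-line check of that miracle in Python (any angle will do; 0.7 radians is just an arbitrary choice):

```python
import math

phi = 0.7                                    # an arbitrary angle
z = complex(math.cos(phi), math.sin(phi))    # a point on the unit circle
d_squared = z * z.conjugate()                # cos² + sin²; the i-part cancels
assert abs(d_squared.real - 1.0) < 1e-12
assert abs(d_squared.imag) < 1e-12
```

Multiplying by the conjugate is exactly what turns a complex number into the X/Y distance-squared.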

12-19-2010, 03:49 PM
If memory serves me correctly, when I was learning complex numbers i (but j was used instead, go figure) was indeed the square root of minus one but we further defined it as a 90 degree counter clock wise rotation of the X axis. it followed then that higher powers of i produced more rotation and sign changes.

Mathematicians use i, engineers use j!

12-23-2010, 12:23 PM
So this thread has gone quiet. What is a reasonable "next step"?

a) draft an outline of the explanation document
b) try to define target and scope of the explanation
c) just go through the thread and stitch together the various points from the discussion
d) continue the discussion, there is still not enough to work with

Do any of these choices seem more appropriate than the others? Any other suggestions?

12-23-2010, 02:39 PM
Not idle or forgotten; my head is still spinning with new-found knowledge and holiday preps...

but to answer you directly:
First, before anything else, we need to define the audience, because no matter what we write (as we have already seen here) there will be some who feel it just doesn't get there without something else, or that it went too far in such-and-such an area. This is not a put-down of anything or anyone in this thread; it just reflects the reality of different levels and personalities.

As for myself, I vote for a straightforward explanation of the algorithm with as little theory and math as possible: beginning with the bit reversal and why it exists, continuing through to the butterflies with Bean's description, and finally a description of the three loops. In other words, a straightforward description of the classic algorithm, in the order it is most commonly written.

But this is just my take on how to proceed.

12-24-2010, 01:22 PM

Yes it's quiet. That's because we are all deep in thought. Did you see my description of "Fourier Syndrome"?

I have Fourier syndrome so bad that I had the overwhelming compulsion to recast my Spin FFT program into PASM.

The result is the attached program heater_fft.spin that has both Spin and PASM FFTs in it. Either use BST and the #define syntax to select which version is compiled or manually remove the version you don't want.

Now, here is the amazing part: heater_fft turns out to be almost twice as fast as the FFT_1024.SPIN that is knocking around these threads! A 1024 point FFT in 40ms. That's 25 per second, and I have not even tried to optimize the PASM yet.

I will attempt to tidy this up a bit, use the Prop's sine table instead of my own, and optimize a few things. But this basically works.

Now to think about that "Fast Fourier Transform For Dummies" before I forget the troubles I had getting to this point.

12-24-2010, 01:38 PM

I vote for a straight forward explanation of the algorithm with as little theory and math as possible

I totally agree but...

Beginning with the bit reversal and why it exists

Straight away there is a problem. The bit reversal exists because of the way we split the DFT sum into two sums of odd and even elements. That already calls for some tolerance of maths notation.

...continuing through to the butterflies with Bean's description

There are no butterflies in Bean's description. However I would like "FFD" to get the reader from the mathematical definition of the DFT to Bean's code with the minimum of pain. I believe that is possible. It only requires getting the reader to accept Euler's formula without proof.

...finally a description of the three loops

I can't for the life of me see how to give an account of those three loops in a meaningful way without going through the basic mathematical steps of splitting up the DFT sum and factoring out the common "twiddle" factors. Then optimizing the two "twiddles" down to one using a trig identity. All the while suffering from "e" and "i" and "pi".
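For reference while we argue about how to explain them, here are those three loops in executable form: my own Python paraphrase of the classic in-place, non-recursive, radix-2 decimation-in-time algorithm (not a transcription of heater_fft, and with floating point rather than the Prop's fixed point):

```python
import cmath

def fft(a):
    """In-place radix-2 decimation-in-time FFT.
    len(a) must be a power of two; a is a list of complex numbers."""
    n = len(a)
    # first, the bit-reversal reordering of the input
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # the three loops: stage size, butterfly group, butterfly within group
    size = 2
    while size <= n:
        w_step = cmath.exp(-2j * cmath.pi / size)   # twiddle increment for this stage
        for start in range(0, n, size):             # each group of butterflies
            w = 1 + 0j
            for k in range(start, start + size // 2):
                t = w * a[k + size // 2]            # the one twiddle multiply
                a[k + size // 2] = a[k] - t
                a[k] = a[k] + t
                w *= w_step
        size *= 2
    return a
```

The outer loop doubles the butterfly span each stage, the middle loop walks the groups, and the inner loop does the actual butterflies; the explanation problem is saying why each of those is what it is without the sum-splitting maths.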

12-24-2010, 03:50 PM
I agree with the above, and my massive gelatinous cranium comes up with this:

The Fourier for Dummies should be divided into sections of increasing complexity and detail, but each section should potentially stand alone.
1) the 'kid' explanation with no math
2) the 'girlfriend/boyfriend' explanation that talks about what's going on without scaring them off
3a) the programmer's explanation, for somebody that just wants to use it and get on with life; doesn't particularly care why it works
3b) the engineer's explanation for somebody that might want to adapt it for the hardware?
4) heater's explanation, for those on the brink of real understanding
5) Erna's explanation, that includes the darkest, scariest math secrets and alchemy

Sorry if these classifications are rude, this is the only way to express this thought in the short time available to me; I'll edit it out if need be.

12-24-2010, 11:48 PM

1) the 'kid' explanation with no math

Tricky already. The fundamental idea with Fourier is that if you multiply the amplitudes of a sine wave by those of another sine wave then take the average value of the results over an infinite time you get zero except when both frequencies are exactly the same. Therefore you can use that fact to detect a particular frequency in your signal by multiplying the signal by a sine wave of the frequency of interest and averaging the result. How to get over that idea without the reader being aware of sin and cos at least?
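In code the idea is only a few lines; a quick Python check (my own sketch, with the frequencies chosen as whole numbers of cycles over the averaging window):

```python
import math

def avg_product(f1, f2, n=1000):
    """Average of sin(2*pi*f1*t) * sin(2*pi*f2*t) over one period, n samples."""
    return sum(math.sin(2 * math.pi * f1 * t / n) * math.sin(2 * math.pi * f2 * t / n)
               for t in range(n)) / n

assert abs(avg_product(5, 7)) < 1e-9         # different frequencies average to zero
assert abs(avg_product(5, 5) - 0.5) < 1e-9   # same frequency averages to 1/2
```

The whole DFT is just this multiply-and-average done once per frequency of interest.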

2) the 'girlfriend/boyfriend' explanation that talks about what's going on without scaring them off

Impossible for me to imagine. The woman in my life speaks Finnish, Swedish, English and Hebrew. But try to talk anything technical and she's soon changing the subject and/or language.

3a) the programmer's explanation for somebody that just wants to use it and get on with life; doesn't particularly care why it works

No. Not here. Isn't there plenty of that on the net already?

3b) the engineer's explanation for somebody that might want to adapt it for the hardware?


4) heater's explanation, for those on the brink of real understanding

I think it is possible. As I said before, from equation to code with barely more maths required than cos and sin.

5) Erna's explanation, that includes the darkest, scariest math secrets and alchemy

The sky is the limit on complexity here:)

12-25-2010, 08:33 AM
Here's an interesting Fourier related coincidence.

In 1965 Gordon Moore predicted that the number of transistors that could be integrated onto a chip would double every 18 months or so, thus predicting the massive increases in computer performance that we have seen since. (Smaller transistors mean you can have more of them to do work and also that you can run them faster.)

1965 just happens to be the year that Cooley and Tukey published their paper on the Fast Fourier Transform, an algorithm that increased the speed at which one could perform Fourier analysis by a factor of thousands and more.

These two events are kind of related by this paper that points out that for many tasks the increase in performance has been more due to better algorithms being discovered than by brute force increase of computer speed.


12-25-2010, 11:42 AM
Let's go back a little and see that there is a periodical way to think: http://www.nsf.gov/od/lpa/nsf50/vbush1945.htm. Hopefully, this time someone will follow the recommendations. Even if this new report has many more pages to read!

12-25-2010, 07:03 PM

Tricky already. The fundamental idea with Fourier is that if you multiply the amplitudes of a sine wave by those of another sine wave then take the average value of the results over an infinite time you get zero except when both frequencies are exactly the same. Therefore you can use that fact to detect a particular frequency in your signal by multiplying the signal by a sine wave of the frequency of interest and averaging the result. How to get over that idea without the reader being aware of sin and cos at least?

Sorry, I was unclear. I meant no formulas. Everybody can recognize a graph of sin and cos. I was trying to express it somehow using the picture.

This link shows the direction I hope to go with all this. "Kids publish study in Royal Society Journal"


So using the list from post #168, it looks like heater and Erna's team will be focusing on items 3 and over; myself and others will be another team focusing on items 3 and under. Are there any primary school educators watching? Please PM me, I would like to talk to you.

01-01-2011, 09:05 PM
Good morning New Year! One question: How was this spectrum displayed?

01-02-2011, 10:07 AM

Do you mean the heater_fft? It just uses FullDuplexSerial to print 512 frequency magnitudes for display with the terminal screen in the Prop Tool or BST.
heater_fft has its own thread now: http://forums.parallax.com/showthread.php?128292-Heater-s-Fast-Fourier-Transform.

01-02-2011, 10:36 AM
You showed a graph, so did you store the output to a file and make the graphical representation separately, or is the graph created in real time? If the latter, I could try to use Ariba's PropTerminal.

01-02-2011, 11:08 AM
Ah the graphs. Actually I used gtkterm as a serial terminal under Linux, saved the data from the Prop to a file and used gnuplot to draw the graph from that.

I didn't want to add any more I/O code than the minimum to show that it works.

01-10-2011, 08:21 AM
Everything you ever wanted to know about the FFT :

"INSIDE the FFT BLACK BOX Serial and Parallel Fast Fourier Transform Algorithms"

By Eleanor Chu (University of Guelph, Ontario, Canada) and Alan George (University of Waterloo, Ontario, Canada).

http://www.google.fi/url?sa=t&source=web&cd=9&ved=0CGIQFjAI&url=http%3A%2F%2Fread.pudn.com%2Fdownloads155%2Fdoc%2Fcomm%2F686281%2F(ebook)%2520CRC%2520Press%2520-%2520Inside%2520The%2520Fft%2520Black%2520Box-%2520Serial%2520And%2520Parallel%2520Fast%2520Fourier%2520Transform%2520Algorith.pdf&rct=j&q=Inside%20the%20FFT%20Black%20Box&ei=GC4nTcyQLMfDswb37IGzAg&usg=AFQjCNExQjMm276-O87m1mDuBRs63MlEtg&sig2=1Dx9-mr3K3juyWH_JDA0FA&cad=rja

Mind bending!

01-10-2011, 08:59 AM
OK reading the preface of the "Black Box" book has made me feel better. For years I could not connect the FFT algorithm as seen in example codes with the maths of the Fourier Transform as presented in uni. Now I know why:

However, to many computing and engineering professionals, the large collection of serial and parallel algorithms remain hidden inside the FFT black box because:
(1) coverage of the FFT in computing and engineering textbooks is usually brief; typically only a few pages are spent on the algorithmic aspects of the FFT
(2) cryptic and highly variable mathematical and algorithmic notation
(3) limited length of journal articles
(4) important ideas and techniques in designing efficient algorithms are sometimes buried in software or hardware-implemented FFT programs, and not published in the open literature.


Although the FFT, like binary search and quicksort, is commonly used in textbooks to illustrate the divide and conquer paradigm and recursive algorithms, the FFT has a unique feature: the application of the basic FFT algorithm to “naturally ordered” input, if performed “in place,” yields output in “bit-reversed” order. While this feature may be taken for granted by FFT insiders, it is often not addressed in detail in textbooks.

I don't feel so dumb anymore:)

03-17-2011, 02:26 AM

I am thankful for your work, many great explanations. I thank you very much for your ability and willingness to explain the mathematical side of these concepts.

You definitely seem to me (from what I have read of yours) to be more a mathematician than a scientist (mathematics is a language of absolutes where one exception disproves a rule, while science is the art of the indefinite, wherein no theory, "law", or axiom is considered an absolute).
I must say that sometimes your view is way too concrete in dealing with concepts. There is an importance in considering the more ethereal side of this, especially in light of the accepted (near axiom) that to observe anything is to change its very nature, or the fact that the result of the DFT, if reversed beyond the bounds of the original samples, would indeed produce a repeating waveform that matches the initial sample and may or may not match any portion of the waveform after or before the sample period, and the reversal could be extended on to infinity in both directions.

Thank you for providing the ethereal side to balance out the absolutes. I do not think this thread would have been comprehensible without both sides. Now if only it were possible for one person to provide and understand both sides as the perfectly interwoven area of knowledge and method that they are.

03-17-2011, 07:06 AM

I thank you for your thanks. Whilst we have had a good talk about the FFT, we still have not created that "FFT For Dummies" that gets the reader from the FFT definition, to an understanding of why it is defined that way, to being able to create the FFT in code. Preferably without too much unnecessary math that fogs the essential idea. I hope to have time to work on that again one day.

Your comment about me being a mathematician made me chuckle. It would have made my maths profs chuckle too:) I've never considered it one of my strong points, if indeed I have any strong points. After all, it has taken many years for the FFT penny to drop for me. As it happens I am a physics graduate, but then 90% of our studies there was maths.

I do agree about the "ethereal side". Assuming I understand what you are getting at, that is :) And I do appreciate this business about visualizing the extension of the waveform out to infinity in both directions.

Thing is, you can take almost any part of mathematics and everyone will be looking at it from different directions and taking different things away from it. That's how maths progresses, I guess. For my FFT explanations I like to stick within the finite bounds of the computer we are going to use to calculate it. Those limits are a cold, hard, practical reality. The computer knows nothing of the "infinity" of which you speak.

Upshot is that we might think of the thing extending to infinity, and mathematically we might think in terms of continuous functions rather than discrete samples. Integrals instead of sums. But we should be aware of the limitations that occur once we have digitized all this into a machine.

03-17-2011, 02:33 PM

In calling you a mathematician I refer to this preference to confine yourself to absolute bounds, in this case the limits of sampled waveforms for Fourier operations. I must chuckle at your statement about summation versus integration, as even real-world summation approaches integration as the sample rate approaches infinity (I know, but I do not wish to write this all out, as it would take many pages).
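For what it's worth, the "summation approaches integration" point can be sketched in a few lines rather than many pages. This is an illustrative Python snippet (the integrand and sample counts are my own choices): a midpoint sum of sin(pi*t) over [0, 1] converges to the exact integral 2/pi as the number of samples grows.

```python
import math

def midpoint_sum(n):
    """Midpoint Riemann sum of sin(pi*t) over [0, 1] with n samples."""
    dt = 1.0 / n
    return sum(math.sin(math.pi * (i + 0.5) * dt) for i in range(n)) * dt

exact = 2.0 / math.pi  # the true integral of sin(pi*t) on [0, 1]

# More samples, smaller error: the sum approaches the integral.
errors = [abs(midpoint_sum(n) - exact) for n in (10, 100, 1000)]
print(errors)
assert errors[0] > errors[1] > errors[2]
```

The error shrinks roughly with the square of the sample count here, which is exactly the sense in which a discrete sum "becomes" an integral.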

03-17-2011, 02:55 PM

In calling you a mathematician I refer to this preference to confine yourself to absolute bounds, in this case the limits of sampled waveforms for Fourier operations.

Ah, I see what you mean.

I must chuckle at your statement about summation versus integration, as even real-world summation approaches integration as the sample rate approaches infinity

Well yes, the more samples you have and more bits per sample the closer to reality you get.

But there is more to it than that.

As you said, mathematically, or "ethereally" as you put it, the waveforms exist from minus infinity to plus infinity. But with the discrete transform we are confined to a box of a limited number of samples over a limited time. So really the thing only works cleanly for frequencies whose waves "fit" in the box a whole number of times. This means that if you are looking at an input frequency of, say, 60.5 Hz, as may be the case in Jay's project, then all of a sudden a DC offset appears in the FFT results, or a whole slew of spurious harmonics can turn up.

At this point you realize you need some windowing function on the input to neaten things up.
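To make that concrete, here is a rough Python sketch (the sample rate, window length, and bin choice are my own assumptions, not from Jay's project): a 60.5 Hz sine does not fit the one-second window a whole number of times, so energy leaks into bins far from the true frequency, and a Hann window suppresses most of that leakage.

```python
import math

# Assumed parameters for illustration: 256 samples at 256 samples/s,
# so DFT bin k corresponds to k Hz. 60.5 Hz falls between bins 60 and 61.
N, fs, f = 256, 256.0, 60.5
x = [math.sin(2 * math.pi * f * n / fs) for n in range(N)]
hann = [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k, straight from the definition."""
    N = len(x)
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

# Bin 10 is nowhere near 60.5 Hz, yet the raw spectrum still shows
# energy there (leakage). With the Hann window it is far smaller.
raw_far = dft_mag(x, 10)
win_far = dft_mag([a * w for a, w in zip(x, hann)], 10)
print(raw_far, win_far)  # windowed leakage is much smaller
```

The window tapers the samples to zero at both ends of the box, so the "wrap-around" discontinuity that causes the leakage largely disappears.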

My conclusion is that it pays to always keep the reality of the limits of the calculation in mind.

On a related note, I have noticed a few times that programmers throw floating point maths at their problems and are then surprised when their results blow up. There is a tendency to be lazy and assume "Oh, I have floats, they are super accurate and can handle huge ranges, I don't need to worry about how I do my calculations."
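A classic illustration of that trap (my own example, in Python): naively accumulating 0.1 ten million times drifts measurably from the exact answer, while a compensated summation such as math.fsum does not.

```python
import itertools
import math

n = 10_000_000
exact = n / 10  # 1,000,000 exactly

# Naive accumulation: each addition rounds, and the errors pile up.
naive = 0.0
for _ in range(n):
    naive += 0.1

# Compensated (error-tracking) summation of the very same values.
compensated = math.fsum(itertools.repeat(0.1, n))

print(abs(naive - exact))        # noticeably non-zero
print(abs(compensated - exact))  # essentially zero
```

The point is not that floats are bad, but that *how* you do the calculation (ordering, compensation, scaling) matters just as much as the precision of the type.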

02-19-2012, 05:59 PM
I know I came to this conversation late, but Fourier transforms have just come to my attention. The name alone does not convey how important they are. The way my brain works, I needed a graphic illustration of a DFT first; the math will now make more sense. At this point I want to be able to analyze the discrete Fourier transform. I'm still having a bit of trouble understanding how a given amplitude and frequency can be extracted from a waveform.
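The core trick can be shown in a few lines of Python (the frequency index, amplitude, and phase below are arbitrary illustration values): to ask "how much of frequency k is in these samples?", multiply the samples by a cosine and by a sine at that frequency and sum; those two sums give the amplitude (and phase) at that frequency.

```python
import math

# Arbitrary test signal: amplitude 2.0 at frequency index k = 3
# (three whole cycles per window), with some phase offset.
N, k, amp, phase = 64, 3, 2.0, 0.7
x = [amp * math.cos(2 * math.pi * k * n / N + phase) for n in range(N)]

# Correlate the samples against a cosine and a sine at frequency k.
re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))

# For a real sinusoid that fits the window a whole number of times,
# the amplitude comes back as 2/N times the magnitude of (re, im).
recovered = 2.0 / N * math.hypot(re, im)
print(recovered)  # ≈ 2.0
```

Components at other frequencies average out to (nearly) zero in those sums, which is why each DFT bin picks out just one frequency.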

02-25-2012, 06:02 PM
I promise this link will spell it out for you. http://www.fourier-series.com/