@Dave: Yes, you are right. And that is exactly what people often do not realize:
If you select an interval, you select an infinite number of such intervals, one following the other. So the selection of the interval is most critical.
What I've shown was: one period of a base frequency determines the interval, and so all harmonics are fixed. Now it is possible to replace a given signal in the interval by a sum of harmonics (phase-shifted sinusoids) of that base frequency.
But what if we have a signal combined from two sinusoids, 1 Hz and 1.3 Hz? We just say: this is a signal made by adding 1 Hz and 1.3 Hz. But we could never get a simple Fourier-transformed spectrum of just those 2 frequencies.
So, this is something we should be aware of.
A signal is a signal, and in an interval this signal can be replaced by a sum of sinusoids. In doing so, since sinusoids are defined from minus to plus infinity, we imply that the signal is also periodic over that whole range.
In practice this is never the case, and so we generate artifacts whose origin we do not understand; we try to eliminate them with much effort and the systems become more and more complicated.
Anyway: we are able to cancel noise and echo today, there are Ogg and MP3, JPEG etc., and they all live off the FT. If Fourier had known this, he would have been proud.
But what if we have a signal combined from two sinusoids, 1 Hz and 1.3 Hz? We just say: this is a signal made by adding 1 Hz and 1.3 Hz. But we could never get a simple Fourier-transformed spectrum of just those 2 frequencies.
I don't understand this statement. Why would a 13-second sample window not work?
Take a look at this starting tutorial and let me know if it makes sense.
Please comment, together we can make this a great tutorial.
Bean
The first paragraph is good. The rest is also good, but there needs to be more foundation of simple concepts between the first paragraph and the following paragraphs. Otherwise the pictures start to look like spaghetti (very scary, orderly spaghetti).
OK, I'm going to make guesses here:
The CORDIC paper started with a really good first sentence. Then it got a little hazy, until the guessing game, where we could find something to latch onto. I ended up coming back to the first page to figure it out after thinking about the examples. So I guess most of the first page could have come AFTER the first two guessing games, maybe later.
The Fourier paper has the same structure as the CORDIC paper, but does not have the "guessing game" examples. So there is no familiar-ish concept to align my perspectives. Yes. This is the difference, and maybe the key. At least in my case.
I want to see what heater is coming up with, he seems to understand this perspective.
Man, there's so much here, and it's so close. It's like ALMOST being fluent in a new language, but being unable to catch the conversation at parties.
@ErNa - your explanation is obviously spot-on if I could only grasp it, but I can't follow it even when I take off the "new student" blinders. Your train of thought is way over my head at this point. AND I know it shouldn't be. There is some key link missing.
Considering how the rest of civilization has been trying this same task on and off for 200 years, I think we're making amazing progress.
A complex waveform may be treated as being composed of a number of sinusoid waveforms. These sinusoids are of various phases, frequencies and amplitudes.
Here we see that it is very difficult to give a precise statement, even when not writing in a foreign language. "Various phases, frequencies and amplitudes" puts these three properties on one level. But there is an important difference:
Frequencies can only be multiples of the fundamental frequency, whereas phase and amplitude may take any value; unless there is a sampling mechanism, in which case phase too can only be shifted in increments. And even then: is the analog amplitude digitized as a byte, word, long or floating-point number, or kept as an analog value? Is there any physical behavior that is not quantized? And does that quantum correspond to the digital representation? OK, we see: a lot of questions and there is no simple solution.
But can we gain understanding of FT independent of implementation?
@Bean: 13 seconds show 13 periods of 1 Hz but 16.9 periods of 1.3 Hz. 10 seconds, however, would do: 10 periods of 1 Hz and 13 of 1.3 Hz. So, to play devil's advocate, let's take 1 Hz and Pi Hz. Now I think it's harder to come around; if there is a solution, I'll sponsor a pie at the next UPEW!
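To make that concrete, here is a small numerical check (a sketch in Python with numpy; the rate of 10 samples per second is just an assumption, comfortably above Nyquist for these tones). With a 10 s window both tones land exactly on analysis bins; with a 13 s window the 1.3 Hz tone does not, and its energy smears over the neighboring bins:

  import numpy as np

  fs = 10.0                                      # assumed sample rate in Hz
  for T in (10, 13):                             # window length in seconds
      t = np.arange(0, T, 1/fs)
      x = np.sin(2*np.pi*1.0*t) + np.sin(2*np.pi*1.3*t)
      X = np.abs(np.fft.rfft(x)) / (len(x)/2)    # scale so a unit sine reads about 1.0
      f = np.fft.rfftfreq(len(x), 1/fs)          # bin spacing is 1/T Hz
      print(T, "s window, bins with noticeable energy:", np.round(f[X > 0.05], 2))

The 10 s run prints just 1.0 and 1.3; the 13 s run prints the 1 Hz bin plus a whole cluster of bins around 1.3 Hz.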
The transition point from the simple example to the actual math. Managing this transition is key.
All the material given so far is great, but it's like too much water on dry ground.
Bean's "Guessing game" introduction of the concepts of binary search, and the special case where addition can be used to achieve multiplication, and THEN presenting the math that ties it together into something usable; this is the formula I imagine will lead to success.
Since Fourier is one of the more complicated concepts, maybe it would help to take a step back and try a simpler example as another warm up? (unless somebody now has enough to go on and produces magic).
I'm going to start Scaled Integer for Dummies (http://forums.parallax.com/showthread.php?p=957791#post957791). That might give us another point of perspective to help with Fourier. I hope we won't have too many plates spinning at once.
@Bean: 13 seconds show 13 periods of 1 Hz but 16.9 periods of 1.3 Hz. 10 seconds, however, would do: 10 periods of 1 Hz and 13 of 1.3 Hz. So, to play devil's advocate, let's take 1 Hz and Pi Hz. Now I think it's harder to come around; if there is a solution, I'll sponsor a pie at the next UPEW!
Oh, right. I see. Thanks for the clarification.
@Braino, Thanks for the feedback. The Fourier pdf is really just the broad approach right now. But I'll take your comments into consideration.
...
So, to play devil's advocate, let's take 1 Hz and Pi Hz. Now I think it's harder to come around; if there is a solution, I'll sponsor a pie at the next UPEW!
Take a look at the PDF in RickB's post. It describes how signals are handled that have discontinuities at the block boundaries. The key is to multiply the signal times a window function. The window function goes to zero at the block boundaries, and it eliminates the discontinuities. The window function tends to smooth out the frequency spectrum, but it makes it possible to handle real-world signals.
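A tiny illustration of the window trick (a sketch in Python with numpy; the 100 Hz rate and the 3.4 Hz test tone are just assumptions for the example). The 3.4 Hz tone does not fit the 1 s block, so the bare block leaks energy all over the spectrum; a Hann window, which goes to zero at the block boundaries, removes the discontinuity and the far-away leakage collapses, at the price of a slightly wider peak:

  import numpy as np

  fs, T = 100.0, 1.0
  t = np.arange(0, T, 1/fs)
  x = np.sin(2*np.pi*3.4*t)                  # 3.4 Hz does not fit the 1 s block exactly
  w = np.hanning(len(x))                     # Hann window: zero at both block boundaries
  for name, sig in (("no window  ", x), ("Hann window", x*w)):
      X = np.abs(np.fft.rfft(sig))
      print(name, "leakage far from the tone:", round(float(X[10:].sum()), 3))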
I would suggest that the "Fourier for dummies" document should start with a description of the Fourier series. It should then introduce the basics of sampling theory and the Nyquist frequency, which is the highest frequency that can be represented by a sampled signal. The impulse, or delta function, which is a single non-zero point in the sampled domain is actually a sin(t)/t function in the continuous domain. This is a result of the fact that the signal must be band limited below the Nyquist frequency.
As I said: it is not so easy, and there are different ways to reach Rome. The trick with the window is often applied, and after windowing, when much of the signal's information has been destroyed, a lot of effort is undertaken to bring the info back.
That is what I called "artifacts" above.
The Fourier concept is not complicated, if only we gain an understanding of what it is used for and what the principles are.
And we should not discuss how to implement it as the first step, because the FFT is so tricky that even after understanding it, the question remains: how would one ever invent such an algorithm?
OK, just a question: has anybody here downloaded LTSpice from http://www.linear.com/designtools/software/? If not: can anybody explain why you hesitate to do so?
I made a model of the Propeller ADC and could share the circuit. And I could share some files with signals, so everybody could test what the FFT does.
Here is my simple attempt (for simplicity practical details are left for later):
1) You can find the amount of a particular frequency in a waveform by multiplying the waveform by a sine wave and a cosine wave at that frequency and looking at the DC component of each result. Those 2 numbers (the DC value of the result of the sine multiply and the DC value of the cosine multiply) fully describe the content of that frequency in your waveform.
You can physically do this with an analog circuit:
a) Generate a sine wave and a cosine wave at your frequency of interest
b) Make 2 analog multipliers and input your waveform and the sine into one and the cosine into the other.
c) Put a DC low pass filter on each output. The combination of the DC values out of each fully describes the content of your waveform at that frequency. (Details farther down for how)
2) Change the frequency of the sine wave and the cosine wave to see the DC values at other frequencies. Better yet, have a Prop slowly change the frequency of your sine and cosine and have it record the 2 values at each frequency. You now know the frequency content of your waveform over a range of frequencies, ie the frequency spectrum of your signal.
3) This process is reversible. That is from the frequency spectrum of your waveform you can recreate your original waveform in time.
***Anything more than the above is details or practical concerns******
4) If I can describe my waveform mathematically as a function of time then I can do the above using math, and the frequency spectrum could be a function itself (of frequency). This is the Fourier transform.
5) The math for the above uses imaginary numbers and e^jwt as a convenient (not necessarily easy to understand) way to represent the sin and cos components and their relationship.
6) In the digital world the above is done with a Discrete Fourier Transform.
Key Details:
1) How do the 2 numbers fully describe the content at a particular frequency? The content at a particular frequency is a unique sine wave at that frequency. To uniquely describe a sine wave you need the phase and magnitude in addition to the frequency. These can be calc'ed from our 2 numbers we recorded at that frequency. The magnitude will be the sqrt of the sum of the squares of our 2 numbers for that frequency, and the phase is the angle whose tangent equals the cosine-multiply value we recorded divided by the sine-multiply value. This is why we need to get the DC value for both a sine and a cosine multiply, so we could get the magnitude and phase.
2) What do we mean by DC? Well technically the DC value is the average value over all time. Forever being a little too long to wait, if our waveform is repetitive we can look over a time that is longer than both the period of our waveform and of our sine wave or better yet a multiple of both. That will give an answer that is close.
Other Details
3) Other details not answered but needed to implement: How fast do I need to sample in the digital world? How high and low do I need to go in my frequency sweep? How much should I increment my sine and cosine frequency when taking samples? And more.
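Points 1 and 2 and Key Detail 1 can be tried out directly. Here is a minimal sketch in Python with numpy (this is not Ed's analog circuit and not Bean's code, just the same multiply-and-average idea on sampled data; the 100 Hz rate and the 5 Hz test tone are made up for the example):

  import numpy as np

  def probe(x, fs, f):
      # multiply by a sine and a cosine at frequency f and take the "DC" (average) of each product
      t = np.arange(len(x)) / fs
      s = np.mean(x * np.sin(2*np.pi*f*t))
      c = np.mean(x * np.cos(2*np.pi*f*t))
      mag = 2*np.sqrt(s*s + c*c)          # the factor 2 is the N/2 business Heater raises further down
      phase = np.arctan2(c, s)            # angle whose tangent is the cosine value over the sine value
      return mag, phase

  fs = 100.0
  t = np.arange(0, 1, 1/fs)
  x = 0.7*np.sin(2*np.pi*5*t + 0.3)       # 5 Hz, amplitude 0.7, phase 0.3 rad
  print(probe(x, fs, 5.0))                # about (0.7, 0.3): the right frequency responds
  print(probe(x, fs, 8.0))                # about (0.0, ...): a wrong frequency averages away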
Here is my simple attempt (for simplicity practical details are left for later):
1) You can find the amount of a particular frequency in a waveform by multiplying the waveform by a sine wave and a cosine wave at that frequency and looking at the DC component of each result. Those 2 numbers (the DC value of the result of the sine multiply and the DC value of the cosine multiply) fully describe the content of that frequency in your waveform.
- Ed
As you said, this is a simple attempt, and I want to point, with the help of a microscope, to a weak point:
There is no information on WHY those 2 numbers are sufficient. And that seems basic to me.
@Dave:
The impulse, or delta function, which is a single non-zero point in the sampled domain is actually a sin(t)/t function in the continuous domain.
This function shows, to a certain degree, the properties of a delta distribution, but it isn't that function. The delta distribution is always positive, sin(t)/t is not. That is a significant difference.
So we have to show what makes the Fourier series unique and what follows from this.
Understanding the FT is not like watching a movie where the first scene shows you the end.
The impulse, or delta function, which is a single non-zero point in the sampled domain is actually a sin(t)/t function in the continuous domain.
This function shows, to a certain degree, the properties of a delta distribution, but it isn't that function. The delta distribution is always positive, sin(t)/t is not. That is a significant difference.
ErNa,
It appears that you misunderstood my statement. In the discrete time domain an impulse function is ..., 0, 0, 0, 1, 0, 0, 0, ... If this signal is sent out through a DAC and filtered with an ideal low-pass filter with a cutoff frequency at half the sampling frequency, it will result in a sin(x)/x waveform. Specifically, the waveform will be
sin(PI*t/T)/(PI*t/T), where T is the sampling interval.
Note that this function is zero when t = n*T, where n is an integer, except when n = 0, where the function converges to a value of 1.
The frequency distribution of the discrete delta function is a constant value for all frequencies. The DFT of a delta function will produce coefficients with the same magnitude for all frequencies.
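Both statements are easy to check numerically. A small sketch in Python with numpy (the 16-sample block is arbitrary; np.sinc is the normalized sin(pi*x)/(pi*x)):

  import numpy as np

  d = np.zeros(16)
  d[0] = 1.0                                # a discrete impulse: ..., 0, 0, 1, 0, 0, ...
  print(np.abs(np.fft.fft(d)))              # all 16 bins have magnitude 1: a flat spectrum

  n = np.arange(-4, 5)
  print(np.round(np.sinc(n), 12))           # sin(pi*t/T)/(pi*t/T) at t = n*T: 1 at n = 0, 0 at the other sample instants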
Every "real" function is not the delta pulse, because the delta pulse is not a function. But some functions have properties equal to corresponding properties of delta-dis.
Like: they are "alive" only in a small region, the integral spanning the whole x-axis is very close to the integral over the area of interest.
That real functions are only approximations, you can see that the sin(x)/ starts to oscillate before the 1 is passed to the DAC.
But not reality is an approximation of math, but math can approximately reflect the behavior of real things. ;-)
Well, technically speaking the sin(x)/x exists in the analog domain after the DAC, and not before. A practical low-pass filter has to be causal, and it can't oscillate before it sees the impulse. It would begin to oscillate when it sees the impulse, and would delay the signal depending on the number of poles and zeroes it has. And of course, we can only approximate the sin(x)/x function in practice.
It will be a challenge to develop a good "Fourier for dummies" document that is simple, and accurate. It might be best to introduce a few basic concepts and then provide links to other websites that go into more detail.
If you look at the literature, you mostly find extremely abstract papers, where you wonder what starting point the author came from, or just papers extracted from those first ones ;-))
"Ceci n'est pas une pipe" is painted on a very famous painting, and from this we can learn: things are never what they look like.
A signal as a function of a variable never contains any frequency. But you might say: under certain circumstances the signal we have acquired is the same one we would get by adding some sinusoids, each with a certain amplitude and a certain phase shift, with a fundamental frequency having exactly one period in that x-range and the corresponding set of parameters for all its harmonics.
One question we have to answer is: why only harmonics? Could it not be better to take just one frequency and another arbitrary one? That is, in my opinion, what FFD should deliver (FFD = Fourier for Dummies).
It works, but it's not the fastest in the world (it's all Spin). I need to document it a little more for inclusion in the tutorial. P.S. If you happen to have a harmonica, it works great as a test sound source.
Bean
@ErNa - I covered how the 2 numbers fully describe the content in Key Details #1.
My approach was to explain the core concept with as few details as possible. I have found that when I am trying to learn something new, all the details can obscure the core concept. I have trouble distinguishing between what is the core and what is a detail.
In fact, when I learned Fourier in engineering school I could do all the math and pass all the tests, but I did not realize what the core concept was. That was because I was overwhelmed with all the theorems, details and calculations (or maybe it was all the beer that overwhelmed me).
You are right that for a complete or more detailed understanding there is plenty more to understand. Also the dividing line between core and detail is certainly subjective.
Bean,
Excellent stab at FFD. Just the approach I had in mind :)
Now that you have done that, and what with the PDF linked to by RickB, I will now bin my own effort, which was getting too wordy already anyway.
Now if we could only get from DFT to FFT in a few simple logical steps, that would be amazing.
Yeah, I would like to get to FFT, but I don't understand FFT myself yet. I think that is a good thing because I will be able to explain it better after I get it through my thick head.
I have to read through FFD a few more times. But I have a couple of comments:
1) Perhaps the graphical examples should use fewer samples, say only 16, just to make the pictures look simpler and less scary.
2) It should be pointed out that, as we have only samples of the input waveform, it is only possible to have the fundamental frequency and its harmonics, up to the Nyquist limit, represented by those samples, and therefore present in the output.
3) Whilst you demonstrate that multiplying the input sine wave by a wrong frequency "test harmonic" gives a zero result it should be emphasised that this is true for all test sine waves of the wrong frequency. I think this is a significant point to grasp. That's why this probing technique works.
4) I'm wondering if it is worth mentioning that whilst the algorithm described is very slow when wanting a complete spectrum as an output a simpler version of it can be used to detect a single frequency in a continuous stream of input. Basically a quadrature detector. As in the software defined radio for the Prop.
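For what it's worth, here is a sketch of that single-frequency idea in Python with numpy (this is not the Prop SDR object, just the same principle: multiply the incoming stream by a sine and a cosine at the frequency of interest and low-pass the two products; the 1 kHz rate, the 50 Hz target and the one-pole filter are all just assumptions for the example):

  import numpy as np

  def make_detector(f, fs, alpha=0.01):
      n, i_acc, q_acc = 0, 0.0, 0.0
      def feed(sample):
          nonlocal n, i_acc, q_acc
          w = 2*np.pi*f*n/fs
          i_acc += alpha*(sample*np.sin(w) - i_acc)   # crude one-pole low-pass, stands in for the "DC" filter
          q_acc += alpha*(sample*np.cos(w) - q_acc)
          n += 1
          return 2.0*np.sqrt(i_acc**2 + q_acc**2)     # running amplitude estimate at frequency f
      return feed

  fs = 1000.0
  detect = make_detector(50.0, fs)                    # watch for 50 Hz in a continuous stream
  for k in range(5000):                               # feed 5 seconds of a unit-amplitude 50 Hz sine
      amp = detect(np.sin(2*np.pi*50.0*k/fs))
  print(round(float(amp), 2))                         # settles near 1.0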
I also don't get the FFT very well yet.
I notice that when describing how to detect a single frequency by multiplying an input wave by a sine wave and then taking the average of the result, you divide by N/2 rather than just N, without any justification for the 1/2 (well, apart from the fact that it works here). A reader may wonder about that.
Then in your example code I see you are only creating a spectrum output with half as many frequency bins as the number of input samples.
Ah, now I see why there is 1/2 used.
Normally when the DFT is described there are as many frequency bins in the output as there were samples in the input.
The second half of the frequency bins actually contains a mirror image of the first half (same values in reverse order). That means half of the spectral energy is in the second half of the output.
In your presentation you are ignoring the second half of the frequency bins and compensating for the energy loss by using the N/2.
What actually is in that top half of the frequency bins? Turns out to be the negative frequencies present in the input.
Is this important? I'm not sure but if anyone were to compare your DFT with the normal equations they would see 1/N not 2/N and perhaps get confused again.
A simple case:
1) Input is a sine wave, amplitude 1.0.
2) Frequency we test against is a sine wave, amplitude 1.0.
3) Multiplying these gives a sinusoid at twice the frequency, amplitude 0.5, riding on a DC offset of 0.5.
4) Taking the average of that yields 0.5.
But our input amplitude was 1.0, so what happened to the other 0.5?
Turns out it is there in the negative frequency region.
What is this negative frequency, how is that possible?
Normally the DFT has as many output frequencies as input samples. BUT given N samples of input, there are only half as many frequencies that can be represented: the Nyquist limit. So what is normally in the other half of the output? It's the other half of the missing energy, but for negative frequencies. The negative frequencies are the same as the positive ones, just generated from something that rotates in the opposite direction.
Ahhh !...Fourier gave us so much to think about. Before we even get to using the thing.
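Here is that bookkeeping in numbers (a Python/numpy sketch; 16 samples over one second is an arbitrary choice). Dividing the full DFT by N puts 0.5 in the +1 Hz bin and 0.5 in its negative-frequency mirror; keeping only the first half and scaling by 2/N, as the tutorial does with its N/2, puts the full 1.0 back in one place:

  import numpy as np

  N = 16
  t = np.arange(N) / float(N)                  # 16 samples spanning one second
  x = np.sin(2*np.pi*1*t)                      # unit-amplitude 1 Hz sine
  X = np.fft.fft(x)
  print(np.round(np.abs(X)/N, 3))              # 0.5 in bin 1 and 0.5 in bin 15 (the -1 Hz bin)
  print(np.round(2*np.abs(X[:N//2])/N, 3))     # first half only, scaled by 2/N: 1.0 in bin 1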
What do we do if we face a paradox? Mostly we find an explanation that no one can understand, but no one dares to admit it.
A simple example, far from being complicated: two equal capacitors, one charged, one empty, connected via an ideal switch and wires without losses. If you close the switch, half of the original energy is gone. And now, where did it go? This is discussed over and over, and a lot of imagination is used to explain it.
OK. Here is the FT-related paradox: you have two different frequencies, let's say 1 Hz and 1.1 Hz, and we want to know how much of one frequency is "found in the other".
Now, who can give an answer to both questions?
I think we have opposing philosophical views here.
On the one hand you make statements like "there is no frequency" and imply that there is some mystical paradox going on.
On the other hand I see a concrete reality of numbers going in and numbers going out of a computer.
OK, you could go further and say "there are no numbers", just different states of transistors forming patterns in a silicon chip, which we humans just happen to interpret as numbers.
One can make such statements but it's not very helpful. After all we did make those silicon chips to behave in a way that we like and that corresponds to concepts that we humans have.
Thing is Fourier algorithms do work on computers, we do get what we want. There is no magic and no paradox in the machine. It's a real machine working in the real world producing real results.
So, my thesis is that the operation of such an algorithm on such a machine is explainable without the mystical objects of higher mathematics, infinity, integral, complex numbers etc.
...paradox: you have two different frequencies, let's say 1 Hz and 1.1 Hz, and we want to know how much of one frequency is "found in the other".
I'm not sure I understand your meaning of "found in the other". But it looks like a trick question.
Reality is we have a fixed number of samples of some signal as our input data. In fact forget the signal, we just have a list of numbers representing values at equally separated points in time.
Given that list, a few moments' reflection will show that the only frequencies that can be represented by such a list are harmonics of the lowest frequency of which we can fit a whole cycle into our sample set, up to the point where every sample has an opposite value, the Nyquist limit. And zero frequency of course.
So if we have 16 samples taken over one second the only possible frequencies that can be represented are: 0, 1, 2, 3, 4, 5, 6, 7, 8 Hertz.
Often the discrete fourier transform is presented as having the same number of output frequencies as input samples. In which case the output list represents amplitudes for frequencies 0, 1, 2, 3, 4, 5, 6, 7, 8, -7, -6, -5, -4, -3, -2, -1 Hertz.
Hmm...I think that's the right order.
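That order can be checked against numpy's own bin labelling (a one-line sketch; note numpy writes the Nyquist bin as -8 rather than +8, which for real input is the same bin either way):

  import numpy as np
  print(np.fft.fftfreq(16, d=1.0/16))   # [ 0. 1. 2. 3. 4. 5. 6. 7. -8. -7. -6. -5. -4. -3. -2. -1.]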
"But what about frequencies in between those harmonics?" I hear you say. Repeatedly now. "My input signal may have those".
Sure your analog input may have those "in between" frequencies. But I maintain that the data samples we are working with cannot represent those frequencies.
In the same way that our input samples cannot represent those little wiggles that occur in our signal in between sample points, they cannot represent any continuum of frequencies either.
Indeed if you sample an analog input signal which is a sine wave at some "in between" frequency you will get a result with the energy shared among neighboring frequency bins in the output and probably some DC level as well.
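This also settles ErNa's 1 Hz and Pi Hz challenge, at least numerically: since Pi is irrational, no block length makes it a harmonic, so its energy is always shared among neighboring bins. A quick sketch (Python with numpy, 10 samples per second assumed):

  import numpy as np

  fs = 10.0
  for T in (10, 13, 100):                          # block length in seconds
      t = np.arange(0, T, 1/fs)
      exact = np.sin(2*np.pi*1.0*t)                # 1 Hz always fits these blocks exactly
      pi_hz = np.sin(2*np.pi*np.pi*t)              # a "Pi Hz" sine never fits a whole block
      for name, x in (("1 Hz ", exact), ("Pi Hz", pi_hz)):
          X = np.abs(np.fft.rfft(x)) / (len(x)/2)
          print(T, "s,", name, ":", int(np.sum(X > 0.01)), "bins above 1% of full scale")

For these block lengths the 1 Hz tone always occupies a single bin, while the Pi Hz tone spills into dozens of them.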
Yes, we have different understandings, and so does everyone. On the one hand I make statements, on the other hand you see numbers: how to compare that? There is no discrepancy, it's just different.
You are right: numbers in, numbers out. But what has this to do with a frequency? What has this to do with the FT? Numbers have a meaning, there are certain relations between them, and a transform can make obvious what is hidden.
What I showed above is: numbers, too, follow rules and are not just there.
If you have a number of numbers representing a signal, you can see there is already something confusing: a "number" of numbers.
Fourier works on a computer, but mostly you do NOT get what you want. You get what you want if you have a problem that you already know can be solved using the FT, but that is not the usual situation. Only if the set of problems you want to solve consists of problems for which the FT is a suitable method will you be successful.
It is as if you say: the sky is full of stars. Yes. But stars are just spots in the dark, and mostly the sky is black. Remove the stars and the sky will stay. The reverse?
Look: you say "Reality is we have a fixed number of samples of some signal as our input data. In fact forget the signal, we just have a list of numbers representing values at equally separated points in time."
But what you don't say, and what most people are not aware of, is that there is one additional condition: if you have n data points (numbered 0 to n-1), then data point n is equal to data point 0, data point n+1 to data point 1, and so on, for all time. Only then does an FT make sense.
If this were not the case, you would never be allowed to talk about "frequencies".
What I want to show is: under certain conditions something is true. If the conditions are slightly different, it is no longer true AND yet not false. It is nearly true. So we no longer have a binary logic.
To really understand the Fourier transform, you have to know some rules about it. One is: the FT can express a signal represented by an array of numbers, where these numbers are seen as EQUIDISTANT samples of a function of a variable. The defined interval is one period out of an infinite number of periods, and only then can you express this interval as a sum of phase-shifted sinusoids, with a fundamental frequency covering 2*Pi in said interval and its n-1 harmonics.
The point is: there are no frequencies in between those harmonics. It is just this: let's say the function is a sinusoid with exactly one period in the interval. Then the Fourier transform gives a spectrum like 1, 0, 0, ..., 0, 0, 1 (counting the bins from the fundamental upward).
That is very simple, because apart from the fundamental everything is zero; but the last element is also 1. Because you know that the origin was a sinusoid, you know the last element is an "alias".
If you now change the input signal in only one sample, the spectrum will be <> 0 for all frequencies.
We cannot decide whether such a result comes from a real waveform or from a wrong selection of the interval. That is what I wanted to point out!
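That last remark is easy to verify (a Python/numpy sketch; the 16-point block and the 0.1 disturbance are arbitrary):

  import numpy as np

  N = 16
  x = np.sin(2*np.pi*np.arange(N)/N)                  # exactly one period in the interval
  print(int(np.sum(np.abs(np.fft.fft(x)) > 1e-9)))    # 2 bins: the fundamental and its mirror ("alias")
  x[7] += 0.1                                         # disturb a single sample
  print(int(np.sum(np.abs(np.fft.fft(x)) > 1e-9)))    # 16: every bin is now non-zero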
"There is no frequency" means: we define the frequency. We, the observers, define what "is".
We do not know whether the Propeller really has 8 cores or whether there is a single, very fast core emulating 8. Even if you open the housing, you see 8 similar patterns, but this could be mimicry. Still, there is evidence that there are 8 cogs: the power, the price and the spectrum emitted when the chip is running ;-)
The reason you can't have frequencies that are not multiples of the fundamental frequency is that the signal block repeats over and over to +/- infinity. This assumption is true for both the Fourier series and the discrete Fourier transform. Under this assumption, a non-multiple sinusoid is actually a more complex signal made up of multiple frequency components with a discontinuity at the block boundary.
You could use a block size that matches both the 1 Hz and 1.1 Hz sinusoids. However, in practice the block is multiplied times a window function to remove this constraint.
The spectrum does include N frequency values. For real-valued data the spectrum is symmetric around the Nyquist frequency (one-half the sampling frequency). In general, this is not the case for complex-valued data. Of course, in the real world we normally deal only with real-valued data.
Frequency = unique series of amplitudes over time. Frankly, I think the concept of resonance needs to be applied here. Some resonance between the math construct, and elements of the data, reveals information about the frequencies within the data.
@Dave: even though we both know quite a lot about the FT, we are hardly able to share this knowledge and to find where it overlaps. So, what do others think about it?
You are completely right, and I said it above: whenever you select an interval (and that is what you do by having an array of numbers), you implicitly accept that the individual elements of this array are sampled at successive moments in time, that from sample to sample the time advances by the same amount, and that the sampling is dense enough that the signal doesn't change wildly from sample to sample. This is where to start.
Further, we imply that the signal is repeated over and over in time and that there is no discontinuity when changing to the next period; that is, the condition we have from sample point to sample point is also valid for the transition to the next interval.
Only if this condition is met does the FT bring results which make sense.
And it doesn't matter whether we are talking about the DFT, the FFT, or whatever FT, because that is just information about the implementation and realization of the algorithm. In my eyes it makes more sense to explain what it is good for than how the algorithm runs. Admittedly, I have not yet answered for myself what is behind the butterfly operation, that is, what it means to calculate the FFT of a 2-sample interval.
As long as I do not see this, I am not satisfied. But that is not my problem.
To me it is funny that the simplest things can be the focus of science, and that answering very simple questions can be extremely difficult and time-consuming. But it can bear a lot of fruit.
Stripping adhesive tape from graphite can bring you the Nobel Prize. And that is much more difficult than running a collider ;-)
Just a parallel:
If you are driving a swing, you normally do not apply a sinusoidal force but a "push", and the swing does a kind of DFT: it selects just its own eigenfrequency and so gains amplitude push by push.
That really is an example of resonance. So we see: if a resonator is excited by a pulse, then this pulse contains a frequency determined by the resonator. And that's real. A signal contains a frequency if a resonator can exchange energy with it.
FT is a very powerful tool and applied in many applications. Simple and sophisticated ones.
Let's assume you have a signal with a single frequency added, like 60 Hz from the grid. You do the FT, you cut away the 60 Hz component and you do the inverse FT. You get the original signal back, but if there was a genuine 60 Hz component in the signal itself, it is now missing too. But that may be acceptable.
But instead of the inverse FT you could do 3 more forward FTs and get the same result. That is funny. Why is that? What does it mean? What is it good for?
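That remark can be checked in a couple of lines (Python with numpy; note that with numpy's unnormalized transform you also have to divide by N twice, whereas with the unitary normalization four forward transforms give back the input exactly):

  import numpy as np

  x = np.random.rand(8)                 # any block of samples
  F = np.fft.fft
  y = F(F(F(F(x)))) / len(x)**2         # four forward transforms, scaled by 1/N^2
  print(np.allclose(y, x))              # True: we are back at the original samples

Two forward transforms give the time-reversed signal (scaled by N), which is why two more bring it back again.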
FT has another important property:
You know that the log allows you to bring multiplication down to addition, which simplifies calculations.
In the same way, the FT brings a convolution down to a multiplication.
The FT also allows computed-tomography data to be turned into a picture. And so on.
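The convolution property is another one-liner to verify (a Python/numpy sketch; note that for blocks of samples it is the circular convolution, which is exactly the periodic-repetition assumption discussed above showing up again):

  import numpy as np

  a = np.random.rand(8)
  b = np.random.rand(8)
  direct = np.array([sum(a[m]*b[(n - m) % 8] for m in range(8)) for n in range(8)])  # circular convolution
  via_ft = np.fft.ifft(np.fft.fft(a)*np.fft.fft(b)).real                             # multiply the two spectra instead
  print(np.allclose(direct, via_ft))                                                 # True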
http://www.syscompdesign.com/AppNotes/fourier-transform.pdf
There are a lot of other good tutorials here.