Review on Fourier Transform — Parallax Forums

Review on Fourier Transform

When I first came in contact with the Fourier Transform, I wrote a straightforward version, which ran for 3 days on my microcomputer to calculate 512-point spectra. Then I discovered my first FFT code, and that brought the time down to seconds. At that time, the Cooley-Tukey algorithm was something from the past to me, and I did not realize that it had been created only 15 years before. Now I find current work where algebra is used to introduce the Fourier transform and fast algorithms in general. And, as I somehow missed a class, I just don't understand what they are talking about. So I'm searching for introductory papers, and here is an overview of this topic, from 1990! https://math.berkeley.edu/~strain/273.F10/duhamel.vetterli.fft.review.pdf
As can be seen: the FFT is not a single algorithm; there are many approaches, each with pros and cons, and at that time research had not reached a final state. I do not know what has happened over the last 25 years.
Some of us are interested in signal processing, and with the next-gen P, new implementations may be possible that allow new applications, taking into account that there are DACs to excite a system and ADCs to measure the response.

Comments

  • Heater. Posts: 21,230
    edited 2015-08-22 08:57
    ErNa,

    Ah, happy days:
    http://forums.parallax.com/discussion/127306/fourier-for-dummies-under-construction/p1
    http://forums.parallax.com/discussion/128292/heater-s-fast-fourier-transform/p1

    The FFT gives me a headache. It took me 24 years from first seeing the source code of an FFT to gaining enough understanding of how it works to be able to write one for myself in C from scratch. Subsequently in PASM for the Propeller.

    Despite studying the Fourier transform years before, the FFT was unfathomable. I mean, for example, what's all that "bit reversal" thing about? Crazy stuff.

    Many thanks are owed to those who gave enough hints and tips on those threads for the penny to drop here.

    We have been looking forward to moving that PASM FFT to the Propeller II ever since....
  • ErNa Posts: 1,742
    Hi Heater, I remember those discussions. I never used the FFT on a Propeller, as I had no application. To me only one FFT existed; now I see there are so many approaches, and the available hardware decides which algorithm is best. But the paper is about 25 years old, and there is one chapter on polynomial transforms. In this field it seems that progress has been made, but I have to learn the algebra to have a chance to follow. ;-( But this old paper shows some drawings that may help to understand the principle. In fact, I already have problems understanding a 2x2 complex transform :-)
  • Heater. Posts: 21,230
    ErNa,

    I think half the fight with understanding the FFT is this business of complex numbers.

    First off we find we have to deal with the "square root of -1", and the whole language used makes you nervous: "imaginary" numbers, "complex" numbers. It all sounds far out there.

    I'm pretty certain we can do all this without ever mentioning the square root of -1 or "complex" or "imaginary" or other scary stuff. Something like this:

    Instead of talking about "numbers", let's talk about "values". Like variables in a program, "values" are not numbers; they are places where you put numbers.

    Normally we assume that values are numbers like 1, 2, 3, 3.14159 and so on. But let's instead say that all values are made of two numbers. We could write values like [a, b] or [3, 9] rather like we write arrays or structures in programming. Think of "values" as those x, y points on a normal graph as you were taught about in junior school.

    Normally we think of addition of values as adding numbers, "1 + 2" or "x + y". Here we want to define addition for our values that are made of two numbers each. Easy, let's define addition of values as:

    [a, b] + [c, d] = [a + c, b + d]

    Normally we think of multiplication as multiplying numbers, "2 * 3" or "x * y". Here we want to define multiplication for values that are made of two numbers each. Easy, let's define it as:

    [a, b] * [c, d] = [ac - bd, bc + ad]

    That might look like a weird rule for multiplication to anyone who has done any algebra. It's not weird, it's just a different rule!

    Finally we can define a "size" or "magnitude" for our values. The size of [a, b] is the square root of a*a + b*b. Just like Pythagoras giving us the length of the line from the origin on our graph to whatever point we have.

    With a few examples and messing around we can show that our "weird" rule for multiplication gives rise to rotating points around the origin. That is to say the sines and cosines we need for analysing frequencies in a signal. Also that the size is exactly the amplitude of the signals we are interested in.

    I reckon with this approach we can explain the Fourier transform and the FFT without ever mentioning "complex", "imaginary", "i", or any scary stuff like "e to the power i*PI*n..." and so on. We are just mashing around with sines and cosines as represented by our values [a, b].
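Heater's pair rules are easy to try out in code. Here is a minimal sketch in Python (the function names `add`, `mul`, and `size` are my own, chosen for illustration), showing that the "weird" multiplication rule really does rotate points around the origin without changing their size:

```python
import math

def add(v, w):
    # [a, b] + [c, d] = [a + c, b + d]
    return (v[0] + w[0], v[1] + w[1])

def mul(v, w):
    # [a, b] * [c, d] = [ac - bd, bc + ad]
    return (v[0] * w[0] - v[1] * w[1],
            v[1] * w[0] + v[0] * w[1])

def size(v):
    # Pythagoras: length of the line from the origin to [a, b]
    return math.hypot(v[0], v[1])

# Multiplying by [cos t, sin t] rotates a value by angle t
# without changing its size.
t = math.radians(90)
rot = (math.cos(t), math.sin(t))
q = mul((1.0, 0.0), rot)      # rotate [1, 0] a quarter turn
print(q)                      # roughly (0.0, 1.0)
print(size(q))                # still 1.0: rotation preserves magnitude
```

Multiplying by the pair [cos t, sin t] is exactly the rotate-by-t operation, which is all the "twiddle factors" inside an FFT are doing.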
  • ErNa Posts: 1,742
    The existence of imaginary numbers is confusing, I agree. That is where knowledge has to become public domain again. I think most people, me included, believed math to be something that just exists and that we only have to learn and understand. But it turns out that math is artificial, and new knowledge is generated every day; like bug-free software, progress is small and most effort is spent proving correctness.
    So, all numbers are imaginary. In this way: here are items; by grouping and joining we do not change the amount, only the place, ownership, etc., and now we invent delegates that allow us to do calculus and simulate the behaviour of real items. If we create such a math, this math cannot reflect the situation where a male item and a female item are grouped and the number of items suddenly changes. In that case, new math has to be invented.
    To start with the natural numbers: initially we answer the famous question, to be or not to be, by inventing the code elements zero and one (as an abbreviation of "not zero"). Then we define operations like add and multiply, and so we can create a first math that allows us to have more numbers by adding a one to the zero and continuing to do so.
    The point is that time passes by, which we cannot influence. So the next number is generated a little later, as addition takes time. So the question can be raised: if we create a new number by multiplication, do all the numbers in between already exist? Or do we have to increment first to create all numbers and only later use them for multiplication purposes?
    I just want to make clear to myself: all the numbers I know are imaginary, so I do not have to fear imaginary numbers.
    I have to confess: I made a mistake when I tried to understand the difference between a complex number and a two-dimensional vector of reals. That resulted in some disputes, but it is solved now; I learned a lesson.
  • Heater. Posts: 21,230
    Ha yes, we can get very philosophical about numbers. Clearly they do not exist. They are constructs of our minds.

    But wait, we exist as physical entities. Our physical entity has a brain. Which contains concepts, such as numbers. We know that because we use and talk about numbers all the time. Those concepts are physical states of our brains. Ergo numbers exist.

    The issue with complex numbers is simply the words "complex" and "imaginary" and that cursed square root of minus 1. It immediately gives the impression of some deep, unfathomable, mathematical thing that is impossible for mortals to understand. Very off-putting.

    All we are doing is setting up a representation of values and defining simple operations on those values that makes it easy to work with sin and cos at the same time.

    Clearly the pairs of numbers, "complex" values, are not the same as 2D vectors. Both complex values and 2D vectors have the operations of add, multiply, etc. defined. The rules for multiplying 2 complex numbers are not the same as the rules for multiplying 2D vectors.





  • ErNa Posts: 1,742
    We use the words we are used to using. I had to look up what "cursed" means; my first association was "cursor". Now, thanks to LEO, I understand.
    So give i a chance.
    We created 3 by adding 1 to 2. Imagine going back in time. Now we create 2 by subtracting 1 from 3. And 1 by subtracting 1 from 2. Then we eliminate the 1 by subtracting 1 from 1. And get nothing. And now we realize: nothing is not nothing! 0 exists as a concept. But what if we subtract 1 from 0?
    Now we create something that didn't exist before. But that is what we are used to, so it should not be surprising. The 3 didn't exist before we first added 1 to 2.
    What I tried to show: to create new numbers we need (use) time. As time goes by in a natural way, we do not use the term "time travel". By (impossible) time travelling (we imply "back in time") and using the concept in reverse, we create a new type of number: negative integers. So we are able to create negative numbers but not to travel back in time; that would only work if we found a concept to travel backward in time. Funny.
    By the way: using the Fourier transform brings us from the time-based signal to frequency space. Applying it again, we come back to time, but the signal is reversed! And so is the time axis. Applying the FT again, we are now in frequency space, but sin and cos are exchanged. After the fourth transformation we are back where we started. Applied time travel.

    In a similar way we create fractions if we invert multiplication. We apply inverse multiplication (i.e. division) to numbers we could not create by multiplication. Those can be expressed by p/q, with p and q being integers. But there is a special group of multiplications: p*p. And we define a special name for the answer to the question: which number do I have to multiply with itself (or do I need two instances of the concept "a number"?) to get another number? This is simple in the case of 4: the answer is 2. It is also simple for the case of 2: the answer is the number that, multiplied with itself (abbreviated: squared), results in 2. We call this number the square root of 2.
    And what about -2 ?
    It turns out we can easily apply a trick: every negative number is the product of a positive number multiplied by -1. And somehow we already know: sqrt(-2) = sqrt(2) * sqrt(-1), and this shrinks the problem to "inventing" or "imagining" a number whose square is -1. This number we call "i*1" or, simplified, "i".
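ErNa's earlier remark, that applying the transform four times brings you back to where you started, can be checked numerically. Here is a small sketch with a naive DFT in pure Python (my own code, not from the thread):

```python
import cmath

def dft(x):
    # Naive DFT: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
N = len(x)

# Two applications of the (unnormalized) DFT give N times the
# time-reversed signal: sample index n goes to (-n) mod N.
xx = dft(dft(x))
print([round(v.real / N, 6) for v in xx])          # [1.0, 4.0, 3.0, 2.0]

# Four applications return the original signal, scaled by N*N.
xxxx = dft(dft(xx))
print([round(v.real / (N * N), 6) for v in xxxx])  # [1.0, 2.0, 3.0, 4.0]
```

Note the first sample stays put under reversal, because reversal is modulo N: x[0] maps to x[0].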
  • After reading this my headache is back, remembering complex imaginary numbers in college, with a slide rule to boot. Never used them once I got a job.
  • Heater. Posts: 21,230
    KMyers,

    Oh shoot. Then I have failed. My plan is to be able to introduce people to signals and the Fourier Transform and hopefully explain the Fast Fourier Transform without ever mentioning "complex" numbers, or sqrt(-1), or Euler's identity and all that scary stuff.

    I'm hoping it's possible with just high school algebra, sin and cos and Pythagoras.

    Oddly enough, in all the years I studied maths in school, then technical school, then university, we never needed a slide rule or calculator. The point was to learn the proofs and techniques, not crunch numbers. It was sufficient to be able to show that bla, bla = 1/sqrt(2) or whatever. The actual numerical result was not important.

    However, the slide rule was great in Physics. When you were faking the results of that experiment you had not done it introduced just the right kind of errors to the results! Even if you had done the experiment the slide rule always reminded you to not quote more digits of accuracy than your measurements could possibly justify.
  • kwinn Posts: 8,697
    Heater. wrote: »
    ................
    ..............................
    However, the slide rule was great in Physics. When you were faking the results of that experiment you had not done it introduced just the right kind of errors to the results! Even if you had done the experiment the slide rule always reminded you to not quote more digits of accuracy than your measurements could possibly justify.

    This reminds me of one of my university profs. One of my classmates bought an early ($300.00+, 4-function) calculator and used it for a test. His answers showed as many decimal places as the calculator provided. For one question the answer was 5.0, but his calculator showed 4.9999, so that's what he put on his paper.

    Since the question was worth 5 points the professor gave him 4.9999 and the whole class a lecture on measurement and calculation inaccuracies.



  • Heater. Posts: 21,230
    kwinn,

    That prof was very generous.

    My experience was that with every experiment one had to provide an error estimation: 1% error in temperature measurement, 0.1% error in current measurement, whatever. If the end result could claim even one digit correct, that was good!

    That meant that anyone quoting a result to 8 significant digits was marked down a lot. It was a common thing to happen when those calculators hit the scene.

  • LOL...

    I did get sorta familiar with Fourier transforms in RF with a spectrum analyzer but did not EVER redo the math.

    Had a regional engineer that, even after RPN calculators were out, still carried a slide rule. The accuracy was there. Old habits die hard; still have one around here someplace with a peace sticker on it.

    Showing my age unfortunately...
  • kwinn Posts: 8,697
    Heater. wrote: »
    kwinn,

    That prof was very generous.

    My experience was that with every experiment one had to provide an error estimation: 1% error in temperature measurement, 0.1% error in current measurement, whatever. If the end result could claim even one digit correct, that was good!

    That meant that anyone quoting a result to 8 significant digits was marked down a lot. It was a common thing to happen when those calculators hit the scene.

    He was not only generous, he was a great teacher and had a good, if somewhat quirky, sense of humor. I always enjoyed his classes.

    Error estimation came a little bit later and then the marking got a lot tougher.
  • For those looking for more on FFTs, check out the following:

    http://www.dspguide.com/pdfbook.htm

    I found it particularly good at explaining what's really happening with an FFT (and a lot of other transforms) without requiring complex math. I strongly recommend reading it if this is a topic you are interested in.
  • Heater. Posts: 21,230
    Searith,

    Thanks for the dspguide link. That is a great resource. Love the example code in FORTRAN and BASIC !

    Sadly I don't think it really gets to the bottom of the problem of understanding how the FFT actually works. As the author says:

    Don't worry if the details elude you; few scientists and engineers that use the FFT could write the program from scratch.

    Perhaps there is enough there if one is fluent in complex maths and smart enough. That was not me :depressed: It took me decades between seeing BASIC code like that and being able to write my own FFT from scratch, with hints and tips from forum members along the way.

    I do like his explanation of how the bit reversal sorting works and why it comes into the picture.


  • ErNa Posts: 1,742
    don't fear the "i"
    I tried to show that numbers are just symbols that are somehow related to real objects. And natural numbers, integers (including negative values), fractions, irrationals, ... they only expand the range of numbers according to new requirements. Many have problems understanding that division can create bigger numbers: 1 / 0.5 = 2. This is a hurdle for every beginner, but once overcome, it becomes "heritage".
    So "i" is just the symbol for the number which, when multiplied by itself, is -1. No more, no less. And as this number is different from the real numbers (the difference is: squaring a real results in a positive number), adding them does not result in another real.
    To show the difference: 1 + 2 = 3. That can be seen as an operation: you have one, you add 2, you get 3. Or it can be an identity: the left is identical to the right.
    Now the case when "i" enters the stage:

    1 + 2i = 1 + 2i

    Now you cannot express the addition as a single symbol! There is no single item to carry the information that a first value 1 was added to a second value 2i. But there is a simple way to create such a symbol:

    1 + 2i = (1 + 2i)

    Unite them by putting them in parentheses. Now you have a number that contains two properties. That's all. Up to now.
  • ErNa Posts: 1,742
    edited 2015-08-24 10:05
    By the way: I looked at the DSP guide and it is certainly worth reading. My advice: read from the end. Then complex numbers are introduced quite early, and that will make life easier. It is funny: once you get the concept of complex numbers, you are comfortable. The easy way is much longer, and in the end you can get no satisfaction. This comes from somebody who took the easy way and now knows you should never do that!
  • Heater. Posts: 21,230
    edited 2015-08-24 09:43
    The problem is there is no end to that book online; all the sections on complex maths are "Available soon". As are other chapters.

    Edit: Ah, OK those sections are in the PDF download.
  • ErNa Posts: 1,742
    As I try to express the topic in normal language (Denglish, not math) I find myself in a mess. Searching for help in the Wikipedia article on "vector space" shows that there is a lot to know, so I'm bringing it down to my misconception. I thought the scalar product was included in the vector space definition. But now I see: only addition and scalar multiplication are needed to create a vector space. As I understand it, confusion comes from the fact that the concepts of vectors and complex numbers are not clearly understood, and so they are often mixed in an inappropriate way!

    Imagine an object. Don't specify it! Just "something".
    Now create a hyper-object by grouping a certain number of objects such that this hyper-object has a first, second, ..., nth component.

    Now imagine having two of those hyper-objects. And you want to add them. The rule is: add the first element of the first hyper-object to the first element of the second hyper-object, etc., and this results in a third hyper-object, which is the sum of the two.
    Realize: we did not describe, how "adding the components" is actually done. But that doesn't matter as we know: a + b = (a+b).

    And now go a step back: imagine having a very simple object: just a number (precisely: a real number R). And now you define a multiplication: R * hyper-object. This multiplication is done in such a way that every element of the hyper-object is multiplied by R: R * (HO) = (R * first component, R * second component, ... etc.)

    Now we have a vector space!
    The vector is identified as the hyper object.

    And now let us talk about the components of the vector.
    First: we never said that all components have to be of the same species!
    Second: to make it no more complicated than necessary, all components are of the same species!

    Let the component be a real number R.

    Then a vector looks like this (R1, R2, R3, ....) as many components as we like.

    And let us limit the number of components to 2 and name them X and Y. Now we feel familiar and imagine a plane in X and Y.

    V = ( X, Y )

    But what if the components are not just real numbers, but more complicated aggregates? Let us make a choice: we take complex numbers. The first complex number shall be (X + iQ), the second (R + iY).

    Now the vector looks like this:

    V = ( (X + iQ), (R + iY) )

    This is a two-dimensional vector with complex numbers as components. Remember: only addition and scalar multiplication are defined. No scalar product, no cross product; whatever it is, it is not!

    I admit, this is difficult to imagine. A plane, where the coordinates are complex numbers. Impossible to imagine, but simple to define.

    But what if we reduce our requirements a little? Let us remove all the vectors where the first component has a value of Q not equal to zero. Then only the vectors

    V = ( (X + i*0), (R + iY) ) remain, and we can just remove the now unnecessary parts (as we know Q == 0):

    V = ( X , (R + iY) )

    The first component is still complex; only the value of the imaginary part is zero. Just as zero is still a number (knowledge we inherited from the Arabs).

    Now we are able to "imagine" such a vector, as we are used to thinking in three spatial dimensions, and we write the vector in the form

    V = ( X , R, Y ) Think about that!

    But let us continue and select only such vectors, where the second component has the real value R = 0

    V = ( X , (0 + iY) )
    We abbreviate this to
    V = ( X , iY )

    And as, by definition, the second component only has an imaginary part (which could be zero too), with the real part guaranteed to be 0, we can also omit the "i".

    Now we have a vector V = ( X, Y ) where the first component is real and the second imaginary, because the first component's imaginary part and the second component's real part equal zero!

    This is exactly the way how complex numbers are mapped to 2-dim vectors!

    Thank you for watching, now I am quite clear!
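The mapping ErNa arrives at can be demonstrated in a few lines (a sketch of my own): complex numbers and 2-component vectors add and scale identically, which is exactly why the mapping works, while complex-times-complex multiplication is extra structure that a bare vector space does not have:

```python
# A complex number z = x + iy maps to the 2-dim vector (x, y).
def to_vector(z):
    return (z.real, z.imag)

z1 = 1 + 2j
z2 = 3 - 1j

# Addition is component-wise on both sides of the mapping.
v_sum = tuple(a + b for a, b in zip(to_vector(z1), to_vector(z2)))
print(to_vector(z1 + z2))   # (4.0, 1.0)
print(v_sum)                # (4.0, 1.0)

# Scalar multiplication carries over as well.
print(to_vector(2.5 * z1))                    # (2.5, 5.0)
print(tuple(2.5 * a for a in to_vector(z1)))  # (2.5, 5.0)

# Complex-times-complex, however, is extra structure: the bare
# vector space has no such product.
print(to_vector(z1 * z2))   # (5.0, 5.0)
```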











  • Heater. Posts: 21,230
    edited 2015-08-24 14:38
    ErNa,

    I don't fear the "i", up to a point. I'm happy with "cos + i sin" and "e to the iπ". Enough for the FFT anyway.

    Still, I'd like to arrive at a coherent explanation of how the FFT works without mentioning "i".

    I was just thinking that the first hurdle for those not used to "i" and friends is the way they will be thinking about the input signal. We've all been through the business of drawing graphs of sine waves in school. We see sine waves on oscilloscopes and such. We conceive of our signal, that sine, as being some single value that changes in time.

    But to get into the FFT you have to think of the signal differently. There is that pesky imaginary thing hanging around in the background and some how representing the phase of our signal. Very mysterious.

    Right from the outset should be the idea that no, your signal is actually made of 2 component values all along. What you see on the scope or draw on a graph is only the result of those two components: their magnitude.

    Looked at this way, we start to think that the so-called "real part" and "imaginary part" are neither real nor imaginary; they are both the same, just orthogonal. As long as you have both, e.g. 1 + 2i, you have no concrete value. The one part is no more real or imaginary than the other part. The magnitude of "1 + 2i" may be the "real" result you want.

    I wish I could write all this down a bit more formally.
    Now you cannot express the addition as a single symbol! There is no single item to carry the information that a first
    value 1 was added to a second value 2i. But there is a simple way to create such a symbol:

    1 + 2i = (1 + 2i)

    Unite them by putting them in parentheses. Now you have a number that contains two properties. That's all. Up to now.
    I think this is why I rebel against the "i" business sometimes. You are quite right: one cannot add the real and imaginary parts of "1 + 2i", ever. But there it is, that "+", a symbol for an operation we can never actually do. Crazy! That "+" serves no purpose except to glue the two components of the value together as we manipulate calculations.

    The best we can hope for is that the pesky "i" gets carried around in our calculations and eventually gets cancelled out, or we just kill it off by taking the magnitude as our final result, which is always a single "real" number.
  • ErNa Posts: 1,742
    Coming from the analog world, I knew spectrum analyzers as continuously scanning receivers. Later I heard the term Fourier analysis, and much later I understood (by myself) that a signal over time can be seen as a vector whose components are the sampled values. Now the question is: what is the orthonormal basis of such a vector space? And the answer: the basis vectors are moments in time, represented by vectors (0, 0, ..., 1, ..., 0, 0, 0). So if my signal has 1024 points, the vector space has 1024 dimensions!
    And that is the point. You have to accept that you can actually "see" and handle a vector in a high-dimensional vector space. As most people fear having more than 3 dimensions, and then there is this mystery of space-time, they do not accept that they can imagine high-dimensional spaces.
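The "signal as a vector" view can be made concrete with a short sketch (my own illustration): the samples are exactly the coordinates of the signal in the basis of time impulses (0, 0, ..., 1, ..., 0, 0, 0):

```python
# A sampled signal is a vector; its coordinates in the basis of
# "moments in time" (unit impulses) are just the sample values.
N = 8
signal = [0.0, 1.0, 0.5, -0.5, -1.0, -0.5, 0.5, 1.0]

def impulse(n, N):
    # The n-th basis vector: (0, 0, ..., 1, ..., 0, 0)
    return [1.0 if k == n else 0.0 for k in range(N)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Projecting the signal onto each impulse recovers the samples:
# the impulse basis is orthonormal.
coords = [dot(signal, impulse(n, N)) for n in range(N)]
print(coords == signal)  # True
```

The Fourier transform is then just a change to a different basis (sampled sinusoids) of the same N-dimensional space.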
  • Looks like I will have to download that pdf. Thanks for the link!

    Now where is the aspirin?
  • Heater. wrote: »
    Searith,

    Thanks for the dspguide link. That is a great resource. Love the example code in FORTRAN and BASIC !

    Sadly I don't think it really gets to the bottom of the problem of understanding how the FFT actually works. As the author says:

    Don't worry if the details elude you; few scientists and engineers that use the FFT could write the program from scratch.

    Perhaps there is enough there if one is fluent in complex maths and smart enough. That was not me :depressed: Took me decades between seeing BASIC code like that and being able to write my own FFT from scratch, with hints and tips from forum members along the way.

    I do like his explanation of how the bit reversal sorting works and why it comes into the picture.

    You have to read the book twice, I think. The first time through, you get to understand DFTs, convolution, and all that pretty well. The FFT part makes a lot of sense in the context of reading the prior material, but it still feels a bit... mystical. However, when you read through the book the second time, you start to get an even better understanding of the DFT (etc) because of the FFT section. Then, by the time you get to the FFT section the second time, it really makes a lot of sense.

    A co-worker of mine, after having done two read-throughs, was able to sit down and write an FFT without reference. I haven't tried myself (and it's been a few years since I read the book), so YMMV. Still, if you are willing to take the time to start at the beginning of the book, I think it's one of the better explanations that I have read (on the topic, not just FFTs).

  • I have always found the discrete cosine transform to be easier to understand / code. I usually recommend that people try that first, then move on to the D/FFT. For the ultimate in "easy-to-implement-and-sort-of-like-the-FFT", you can look at the Walsh-Hadamard transform (fast or regular).

    Jonathan
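For anyone who wants to try the DCT route suggested here, a naive DCT-II fits in a few lines of real-only arithmetic (this sketch is mine, not from the post, and it is the slow O(N^2) form, not a fast algorithm):

```python
import math

def dct2(x):
    # Naive DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 1/2) * k).
    # Everything is real; no complex numbers needed.
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

# A pure cosine at a basis frequency concentrates into one bin.
N = 8
x = [math.cos(math.pi / N * (n + 0.5) * 3) for n in range(N)]
X = dct2(x)
print([round(v, 6) for v in X])  # all ~0 except bin 3, which is N/2
```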
  • ErNa Posts: 1,742
    edited 2015-08-25 06:41
    As I have only used the FFT to analyze a logged signal, what is the difference between FFT <-> DCT <-> DST?
    And another question: anybody here who cannot see a signal as a vector?
  • Heater. Posts: 21,230
    edited 2015-08-25 07:12
    ErNa,
    anybody here who can not see a signal as a vector?

    Here is a nice thought experiment to help see a signal as a vector, for those who have never done that before.

    Take a one inch wide strip of paper. Hold it stretched out in front of you. Put a few twists in it. What do you see? Nice sine waves. As the strip twists you see it flat on, 1 inch wide, then edge on, zero width, alternately.

    Now hold the strip such that you are looking down one end of it. What do you see? The edge of the paper at the end of the strip. A line, perhaps horizontal, perhaps vertical, perhaps at any other random angle you happen to be holding it at.

    We can of course express the angle of that line as a measurement of the distance it covers vertically in space, height, and horizontally, width. We have tan(angle) = height/width, or sin(angle) = height / 1, or cos(angle) = width / 1. (We have 1 there as the paper is 1" wide and we are working in inches.)

    What we have is a vector comprised of width and height.

    Clearly as we move down the paper strip that vector rotates as the strip twists.

    So there we have it, when you look at that sine wave on a scope or graph what you are looking at is only the side on view of a vector rotating in time. Like the twists in that paper strip. If one could somehow look down the end of that scope trace one might see a line, a vector, rotating as the signal comes towards you.

    The next leap of faith is to see that any signal, of any waveform, is only a collection of such vectors rotating at different rates and with different amplitudes. But that is another story...
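The twisted-strip picture translates directly into code (a sketch of my own): sample the rotating (width, height) pair along the strip; the "side-on" view the scope draws is just one component, while the pair itself keeps constant magnitude:

```python
import math

# The twisted strip: at each point along it the signal is a
# rotating pair (width, height).
N = 16
pairs = [(math.cos(2 * math.pi * n / N),
          math.sin(2 * math.pi * n / N)) for n in range(N)]

# The "side-on" view (what the scope draws) is just one component.
trace = [p[1] for p in pairs]
print([round(v, 3) for v in trace[:5]])  # the start of a sine wave

# The pair itself always has magnitude 1: the vector only rotates.
print(all(abs(math.hypot(*p) - 1.0) < 1e-12 for p in pairs))  # True
```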





  • ErNa Posts: 1,742
    This is very interesting, as now I see where we talk of the same thing and where we start from different concepts. I will give details soon.
  • Heater. Posts: 21,230
    Erna,

    You mentioned "vector spaces". I'm afraid this may confuse the issue. Or me at least :)

    A "vector space" is not a "vector". Normally people think of vectors as representing something that has a magnitude and a direction, like a force vector. As such we end up with a list of real numbers: 2 for 2 dimensions, 3 for 3 dimensions, and so on. I believe these are called Euclidean vectors.

    The "vector space" adds more abstraction. As you say, it is a container of objects that may be reals, or complex numbers, or even functions, together with a bunch of rules for addition and multiplication. Of course a Euclidean vector can be a vector space, as can a single real or complex number.

    Vector spaces are used a lot in Quantum mechanics. Recently I was watching Prof Leonard Susskind's lectures on quantum mechanics on YouTube. He strongly emphasises that he will be using the word "vector" to mean these two different things and that the student should not be confused by that. Wonderful lectures by the way, highly recommended

    Susskind has a series called "quantum entanglements" which is part of Stanford's Continuing Studies program. Susskind referred to it once as "Quantum Mechanics for old farts" :) Which is just as well, because when we studied QM in uni it was all done with calculus. I don't recall seeing any of that vector space stuff in there.






  • ErNa Posts: 1,742
    OK, step by step we make progress. No, a vector space is nothing to be confused about. A vector space is just a container for vectors. Take the most simple case: a one-dimensional vector space. To visualize this vector space you can draw a line. A vector is something like (5). One question: what does "5" mean? 5 in this case is a real number. That implies that all the other real numbers exist. We know that real numbers only make sense when addition and multiplication are defined. In the same way, a vector only makes sense when addition and scalar multiplication are defined. We do it this way: (a) + (b) = (a+b). That is: we reduce the addition of two vectors to the addition of two reals. Multiplication is a little different: (a)*(b) is NOT defined!
    But a * (b) is defined to be (a*b); that is, the multiplication is an abbreviation of multiple additions of vectors, and even fractions are allowed.
    By add and multiply, new vectors are created, and the container for those vectors is the vector space. That is all!
    As everything we can do with a one-dimensional vector space can be done in the space of real numbers too, such vectors are just boring.
    This changes when the vector becomes more complex: (a0, a1). Now addition works as follows: (a0, a1) + (b0, b1) = (a0+b0, a1+b1), and multiplication: c*(a0, a1) = (c*a0, c*a1).
    That is all. Now the vector space is no longer represented by a 1-dimensional line, but by a 2-dimensional plane.
    Now that we have created this mechanism, we can continue to a three-dimensional vector in a 3-dim space, ... The math stays simple, and it doesn't matter whether we can imagine a higher-dimensional space or not.
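The operations described above fit in a few lines (my own sketch): component-wise addition and scalar multiplication, with the one-dimensional case reducing to ordinary arithmetic and scalar multiplication acting as repeated addition:

```python
# Vector-space operations on n-component vectors (tuples of reals):
# only addition and scalar multiplication are defined.
def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

def smul(c, v):
    return tuple(c * a for a in v)

# One-dimensional case: vector addition reduces to addition of
# reals, and scalar multiplication is repeated addition.
print(vadd((5.0,), (2.0,)))                 # (7.0,)
print(smul(3, (2.0,)))                      # (6.0,)
print(vadd(vadd((2.0,), (2.0,)), (2.0,)))   # also (6.0,)

# Two-dimensional case, as in the post:
print(vadd((1.0, 2.0), (3.0, 4.0)))  # (4.0, 6.0)
print(smul(2.0, (1.0, 2.0)))         # (2.0, 4.0)
```

Note there is no vector-times-vector product here; that is exactly what the definition leaves out.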
  • Heater. Posts: 21,230
    The confusion arises because the term "vector" is overloaded. People will use "vector" when casually referring to vector spaces or spatial vectors. This is the confusion Susskind warns his students about.

    Further confusion arises because the whole idea of "vector spaces" is not required to describe the operation of the Fourier Transform or FFT. Although I believe that can be done as well.
  • ErNa Posts: 1,742
    edited 2015-08-25 16:21
    Until recently I also had a wrong conception of vector spaces. I thought the scalar product was an integral part of a vector space. But NO: only addition and scalar multiplication. If you read just the first sentence here, https://en.wikipedia.org/wiki/Vector_space, it becomes clear.

    Then there are extensions. But they EXPAND the concept of vector space, and the newly created spaces, having more properties, are also named differently. The problem is that education somehow fails because, as you also stated, misconception is omnipresent.

    So when we discuss the Fourier transform we should only use terms in the right way and only to the degree necessary.

    While the scalar multiply multiplies a vector by something that is not a vector (normally a real number) from "external" (external product), there is another way to multiply: the scalar product (or inner product).
    The rule for this scalar product is: multiply the components of the vectors pairwise and add them all up: (a0, a1) . (b0, b1) = a0*b0 + a1*b1
    If there are more components (a.k.a. dimensions), just add them up; nothing changes.

    I believe, because the scalar product is used for so many purposes, most elementary among them determining the length of a vector, I thought this scalar product was an integral part of the definition of a vector space. But now I realize: it is not; it is a first extension.
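As a small sketch of my own: the inner product rule above, the length it induces via Pythagoras, and the orthogonality of sampled cosine and sine under that product, which is the property Fourier analysis rests on:

```python
import math

def dot(u, v):
    # Scalar (inner) product: multiply components pairwise, add up.
    return sum(a * b for a, b in zip(u, v))

def length(v):
    # The most elementary use: the length of a vector.
    return math.sqrt(dot(v, v))

print(length((3.0, 4.0)))  # 5.0, by Pythagoras

# Sampled cosine and sine of the same frequency are orthogonal
# under this product: their scalar product is (numerically) zero.
N = 16
c = [math.cos(2 * math.pi * n / N) for n in range(N)]
s = [math.sin(2 * math.pi * n / N) for n in range(N)]
print(abs(dot(c, s)) < 1e-9)  # True
```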