Complex Numbers - Page 2 — Parallax Forums

Comments

  • LoopyByteloose Posts: 12,537
    edited 2015-12-06 14:32
    He introduces "Tau".

    Oh my, that takes me off into the whole Golden Section world of spirals and icosahedrons, dodecahedrons, and so on. It ties into having a circle divided evenly by fifths, or 72 degrees -- a whole different world from the Cartesian x, y, and z axes. The best one can do is have six axes representing a sphere constructed of tetrahedrons.

    Whoops, I was all wrong about Tau. It is just 2 Pi.... nothing to do with the Golden Section, which uses Phi. Obviously, my Greek is not so precise.

    Watched Lesson 6, but jumped around. I certainly feel that he is doing a good job of presenting it all, but it is so slow..... I'd still rather read text.

    I am very happy to see that he is clear about the problem of too many names for the same thing. I would just add that there are many different notations for the same thing as well.

    Math has a lot of names for the same thing - that is a semantics problem. And math has a lot of symbols and notations that represent the same thing - that is a semiotics problem.

    Chapter 7 actually applies Complex Numbers to Software Defined Radio. I might actually learn something about useful programming.
  • ErNa Posts: 1,738
    No, I didn't mean to start with the first video, I just wanted to say: if we only follow the introduction of natural numbers, then those are arranged equidistantly on a line. Why? There is no must! If we start with a minimum, we need a zero element, a one element and a "+" operation. The rules are: zero "+" zero stays zero, zero "+" one equals one. And one "+" one equals something else we have to find a name for; one possibility: one+one. Another: two. And so on. So we have a generator that creates the next number by adding the same element "one" to the element that was created last. But when we write the numbers in a row, there is no need to write them at equal distances, which actually is not possible! Because every number needs an identifier, and the more numbers you generate, the more space is needed to place the identifier. 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14... 99 100 ...
    But if you do without shortcuts for addition - that is, "add 1" means "go to the next", and "add 4" means "repeat 'go to the next' 4 times" and NOT "go a distance of 4" - then even non-equidistant spacing of the numbers on a line doesn't matter.

    We see: hanging around with math is more or less not simple!
  • Heater. Posts: 21,230
    edited 2015-12-06 17:04
    Wait a minute.

    Surely we have to assume some conceptual equidistance of something between adjacent integers. Otherwise 5-4 would not necessarily be the same as 10-9.

    Or, what we are doing with integers is trying to count things. If those things are not identical it is meaningless to be counting them together. 2 apples + 2 apples is 4 apples. 2 apples + 2 oranges is what?

    One abstraction of this is those equidistant marks the Greeks were fond of marking off with a compass along a straight line.

    Of course the length of the symbols we use to represent an integer - "1", "309", "897492874" - is beside the point. I could choose to label them in reverse order, for example. Or just stipulate that the text size has to get smaller as we go up to compensate.

    Before the Indians invented zero we were counting quite happily without it. When the Greeks marked off a point on a line with a compass, starting at the end of the line, they called that line segment "1". It was the line segment that had significance not the point they had marked off. Zero did not exist in that view!

  • ErNa Posts: 1,738
    Heater. wrote: »
    Wait a minute.
    Surely we have to assume some conceptual equidistance of something between adjacent integers. Otherwise 5-4 would not necessarily be the same as 10-9.
    No, not necessarily. 5-4 is an abbreviation for "add 5 times 1 to zero and then subtract 4 times 1". And to see if 5-4 equals 10-9, you have to add 10 times 1, subtract 9 times 1, subtract 5 times 1 and add 4 times 1. If this results in zero, then 5-4 equals 10-9.
    This is similar to the fact that a Turing machine can do every computation with simple operations, but a lot of them.

    Therefore we have to agree on the basics of our efforts, and only then does it make sense to discuss questions of math.
    By the way: you can do a Fourier transform without imaginary or complex numbers. Just do a discrete FT for sine and cosine separately. The trick with complex numbers is: it can be done faster. (A small sketch of the separate sine/cosine version follows at the end of this post.)
    By the way: if I understood correctly, there was a discussion in the math community about whether knowledge is increased when a second way is found to prove a theorem. And this question is not stupid. Because if a certain body of knowledge is needed to prove a theorem, and you find a simpler way to get the same result, the next generation may only know the proven theorem and the simple way to do the proof, and miss the higher-level knowledge. That is, if a second way is found, the knowledge may actually decrease!
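    For illustration, here is a minimal sketch of that separate sine/cosine transform (in Python, my own example, not from the lesson). It is the slow version; the complex-number machinery is what the fast algorithms are built on.

    import math

    def real_dft(samples):
        """Return (cosine_coeffs, sine_coeffs) for each frequency bin k, using only real arithmetic."""
        n = len(samples)
        cos_part, sin_part = [], []
        for k in range(n):
            cos_part.append(sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples)))
            sin_part.append(sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples)))
        return cos_part, sin_part

    # Four cycles of a sine wave in 16 samples: the energy shows up in bin 4
    # (and its mirror, bin 12) of the sine table, and nowhere in the cosine table.
    signal = [math.sin(2 * math.pi * 4 * i / 16) for i in range(16)]
    cos_part, sin_part = real_dft(signal)
    print([round(v, 3) for v in cos_part])
    print([round(v, 3) for v in sin_part])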
  • I think Erna is expressing dislike that it is too informal. The presentation of the line in number theory skips over zero, infinity, and negative infinity. Good math lecturers cover it all at a rapid pace, no chatty mucking around.

    I found the beginning of Lesson Six a bit dry and boring. The best part was his honesty in saying that the first time around, he thought that learning Complex Numbers was likely something that he would just set aside and never use. Only later in life did he realize he needed to re-learn them.

    I had the same learning experience, and I suspect most of us have, unless we were on a fast track deep into science or engineering at MIT.

    +++++++++
    Every teacher and presenter is working against a clock. It looks like his lessons are intended to run roughly an hour. And every teacher and presenter takes a big leap at the beginning and hopes that the entry point is right for the audience and won't fall apart. Some days are good, some days are excellent, and other days are not good, or dismal. I have done thousands of classes, so I am sure about this: teaching is simply a performance art. I hope this wasn't his best. Lesson One is very slow and really too long for an introduction. He could have had some detailed previews of highlights of what is to come.

    Having said all that, Lesson Six starts out slow for me personally. Some points were made that were interesting, but they took a lot to develop and may only interest newbies. The stuff about Complex Numbers was good, and I really liked that he mentioned the problems with multiple choices of names. The shift from Pi to Tau is an interesting wind-up, new information. I can't say whether it is really a bit of significant progress in maths or just a gimmick. I have never thought about it. He didn't seem to provide salient advantages.

    But I admit that I simply started out with a dislike of video teaching like this. I'd rather have it written (I know I have said that enough). I will look at the text on Tau to learn more. I gave up on Lesson Seven when I realized it was going to be a Python demonstration. I was hoping for more generic programming tips and some overview. Something may be in there, but I need to go back and see.

    When I walked the dog this evening I pondered my concerns that video presentations and PowerPoint presentations seem to be making people dumber and slower to learn. I had a huge reading load in university and learned to read fast. Videos won't let you speed up. It is like the shift from slide rules to calculators and finally spreadsheets: a lot of mental skills no longer get any great exercise.

    But I guess that is just me being the grumpy old man. In my heart, I hope that I have said something in this thread that will help someone else make progress with Complex and Imaginary Numbers; I have moved ahead quite a bit by sharing my thoughts.

    When I really want to do some serious learning, I go to the textbook store of a top-rate university and see what the courses are using. I don't waste time with "Complex Numbers for the Complete Idiot".
  • Heater. Posts: 21,230
    ErNa,
    No, not necessarily. 5-4 is an abbreviation for "add 5 times 1 to zero and then subtract 4 times 1". And to see if 5-4 equals 10-9, you have to add 10 times 1, subtract 9 times 1, subtract 5 times 1 and add 4 times 1. If this results in zero, then 5-4 equals 10-9.
    In that one paragraph you have disagreed with me. Then demonstrated why I'm right! :)

    In order for the procedure you have specified to work all those ones I add and subtract had better be the same "size". And all the gaps between neighbouring integers had better be the same size. Else I'm never going to get to the target of zero.

    We represent this "same sizedness" by drawing all the line segments the same length. Perhaps even constructing them with a compass set to some fixed distance. But that drawing is just a representation of the underlying concept of integers. We could do this with a bucket of pebbles, for example, putting pebbles in and taking pebbles out, hopefully finishing with an empty bucket. Of course our pebbles may all physically be different sizes, weights, shapes, colours, etc. That is of no concern; each one represents the same "oneness".

    Or, I'm totally missing your point :)

    Yes, we don't need complex numbers to do Fourier transforms. My contention is that we don't need complex numbers to explain the Fast Fourier Transform either. After all Propellers only deal with signed integers but they can do FFT.

    (When I say "complex numbers" above I'm referring to all that i, e, and Euler equation stuff. We will need number pairs for our samples, intermediate values and results.)

    That's an interesting point about proofs. My guess is that if you have more than one way to prove the same thing, that in itself is knowledge. Perhaps the techniques used in each of the proofs can be used to prove other things that may not have come to mind without them. So it looks like it's best to learn all the proofs! But what do I know.


  • ErNa Posts: 1,738
    Heater. wrote: »
    ErNa,
    In that one paragraph you have disagreed with me. Then demonstrated why I'm right! :)

    Ok, I didn't hit the nail on the head. I should say: if we generate the next number by adding 1, and reduce all the math to repeatedly going forward and backward, there is no longer any need to have the same distance; it is sufficient to go one step, but this step may have varying length. So we are both right. You rely on the steps being equal; I rely on never missing a step. In the end, there are different conceptions, but an equal result.
    The funny thing is that scientists had these ideas very early. Gauss said: the integral over a closed surface in space is equal to the charge inside. Maxwell said much the same: the electric field passing through a sphere in space is equal to the charge enclosed. If the sphere shrinks and the integral doesn't change, and you can shrink the sphere more and more, you know the charge extends over less than the diameter of the smallest sphere that shows the same integral. He didn't know about electrons, so he described electrical systems by a field, and charges were only ideas. Later the electron was discovered as a particle, and now the particles were the "real" things, while the field was "imaginary". Later there was the question: can there be a geometry without an existing space? Answering "YES" opened the path to higher-dimensional spaces in geometry, and the calculus on manifolds was invented. But it is still difficult to imagine a sphere's surface without the existence of inner and outer space. And a universe with nothing around it.

  • Heater. Posts: 21,230
    ErNa,
    ...there is no longer any need to have the same distance; it is sufficient to go one step, but this step may have varying length.
    I think I see our difference of view now.

    The way I see it, it is the concept of the "step" that is fundamental when counting up and down. Those steps may have different lengths, but from where I'm looking it's the step that is the "length", and that step is one, and all such steps are the same.

    As you are hinting at, from the point of view of Euclidean geometry all those steps have the same length. The size of the integers and the distance covered are proportional, and every "one" is the same length. But move to a non-Euclidean geometry and that may not be true any more. One starts to have to be careful about what one means by "distance" or "length".
    The funny thing is that scientists had these ideas very early.
    Yep, it's an odd thing that scientists intuit the way things work out, even in quite mathematical ways, guided by experimental results, even when they don't have the mathematical chops required for what mathematicians regard as rigorous proofs, or are not even aware of the problem. Maxwell formalised what Faraday had arrived at, for example.

    If you have time for it here is a brilliant video where a modern day example of physicists getting there before mathematicians is described: "The Unreasonable Effectiveness of Quantum Physics in Modern Mathematics" by Robbert Dijkgraaf https://www.youtube.com/watch?v=6oWLIVNI6VA The bottom line of the story is stunning.



  • potatohead wrote: »

    @potatohead

    Thanks, that may help some. I guess I have just been enjoying my rut. Actually, nobody in Taiwan talks computers with me.

  • ErNa Posts: 1,738
    A very interesting lecture by Robbert Dijkgraaf. I am sorry that I have neither the time nor the skills to follow this more deeply. But the pictures are nice!
  • Heater. Posts: 21,230
    edited 2015-12-09 11:26
    Loopy,
    ...first part of a complex number represents Amplitude (Voltage), while the so-called 'imaginary number' part represents the phase shift (Current).
    Over the years I have read such statements many times. It's probably contributed more than anything else to slowing down my understanding of how the FFT works. It's basically wrong.

    The idea seems to be that the real part contains, say, the samples of a signal as actually measured by your ADC or whatever. The amplitude. While the imaginary part somehow magically encodes a phase shift. With respect to what, we don't know.

    Now, in a common or garden FFT one often enters the measured values into the real part and sets the imaginary parts to zero. So it looks like the above statement is true.

    In reality the amplitude of our signal is represented by both the real and imaginary parts. If the real part is x and the imaginary part is y then:

    amplitude = sqrt(x² + y²)

    And the phase is represented by the angle of that vector [x, y] w.r.t the real axis. For example:

    phase = arctan(y/x)

    So we see both amplitude and phase are represented at the same time by both the real and imaginary parts. Both the real and imaginary parts are required. I would say that neither is more "real" or "imaginary" than the other.

    The case we had initially with the signal in the real part and the imaginary part set to zero is a special case of zero phase shift. It would be quite reasonable to enter a signal into an FFT as both real and imaginary parts.

    As the guy from Hack RF says in his video, look at a signal as a helix, not just a sine wave. The amplitude is the radius of the helix, the phase is the angle you make from the axis to the helix.

    I would like to go back to all the books and articles I have ever read that contain that misleading paragraph quoted above and scratch it out.
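    A quick numeric check of the two formulas above, as a sketch (plain Python; math and cmath are in the standard library):

    import cmath, math

    z = complex(3.0, 4.0)                  # real part x = 3, imaginary part y = 4
    amplitude = math.sqrt(z.real ** 2 + z.imag ** 2)
    phase = math.atan2(z.imag, z.real)     # arctan(y/x), but safe in all four quadrants

    print(amplitude, abs(z))               # 5.0 5.0 -- same thing
    print(phase, cmath.phase(z))           # both ~0.9273 rad (about 53.1 degrees)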
  • LoopyByteloose Posts: 12,537
    edited 2015-12-09 14:31
    Well, complex numbers are indeed used to represent the vectors of a cycle over a cycle.

    But the problem here is more about FFT shifting to a different Domain, from a TIME domain to a FREQUENCY domain.

    The amplitude and phase correlation I mentioned is all in the TIME domain and is an application for figuring out power.

    You have a different application, which leads to a need to understand what the components of a Complex Number represent. I've not worked out what the FFT actually chooses to represent with Complex Numbers. I'll have to get back to this after I learn more.
  • Heater. Posts: 21,230
    edited 2015-12-09 15:40
    I was talking about the input signal in the time domain. No mention of the FFT and frequency domain output.

    For example:

    My input signal could be a simple sine wave that fits neatly into my FFT input buffer, say 1024 samples, starting at amplitude zero, ending at amplitude zero and containing one or more cycles.

    In that case, we can say that the real part is the amplitude and the imaginary part (all zeros) is something to do with the phase.

    But what if I now put the same samples of the same sine wave into the imaginary parts of my input? Now my actual input signal has an amplitude greater than either sine wave, given by the length of the vector, sqrt(x^2 + y^2), and a phase shift of 45 degrees with respect to my sample buffer.

    The amplitude and phase are encoded in both the real and imaginary parts.
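    A quick check of that claim, as a sketch (assuming numpy is available; the 1024 samples and four cycles are arbitrary):

    import numpy as np

    n = np.arange(1024)
    s = np.sin(2 * np.pi * 4 * n / 1024)       # a plain sine wave, four cycles
    z = s + 1j * s                             # the same samples in both parts

    print(np.max(np.abs(z)) / np.max(np.abs(s)))   # ~1.414, i.e. sqrt(2) larger
    print(np.degrees(np.angle(z[1])))              # 45.0 where s > 0 (-135.0 where s < 0)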

    Now, about that business of the "real part being voltage and the imaginary part being current". I have to refresh my memory on this but I seem to recall calculating with complex impedances, complex voltages and complex currents. Say in a series LCR circuit. In that case the actual, measurable, current and voltage were in the imaginary part of the complex current and voltage values.

    Certainly if the real part were voltage and the imaginary part were current, then the imaginary part cannot also be phase as stated.

    Do you have an example of what you were talking about? It might help clear up my confusion.

    Edit: OK skip the example: I just noticed you were talking about power. Still the imaginary cannot be both current and phase as stated.




  • ErNa Posts: 1,738
    Heater. wrote: »
    Loopy,
    ...first part of a complex number represents Amplitude (Voltage), while the so-called 'imaginary number' part represents the phase shift (Current).
    Over the years I have read such statements many times. It's probably contributed more than anything else to slowing down my understanding of how the FFT works. It's basically wrong.
    As you say: this is clearly wrong. If your signal is described by one property, it can be represented by a sequence of "normal" numbers. If your signal has two properties, you need two sequences, which are not related to complex numbers. BUT you can represent pairs of real values as complex numbers WITHOUT asking whether this makes any sense.
    The idea seems to be that the real part contains, say, the samples of a signal as actually measured by your ADC or whatever. The amplitude. While the imaginary part somehow magically encodes a phase shift. With respect to what, we don't know.

    Now, in a common or garden FFT one often enters the measured values into the real part and sets the imaginary parts to zero. So it looks like the above statement is true.
    The simple fact is: the FFT makes use of symmetries that are intrinsic to the harmonic functions sine and cosine. Only these properties allow the computation time to be reduced. To describe a sine, you only draw the first half of a period and then say: the function is continued by changing the sign.
    And: the algorithm itself is symmetric in the sense that it doesn't care about the meaning of the numbers you feed in.
    In reality the amplitude of our signal is represented by both the real and imaginary parts. If the real part is x and the imaginary part is y then:

    amplitude = sqrt(x² + y²)

    And the phase is represented by the angle of that vector [x, y] w.r.t the real axis. For example:

    phase = arctan(y/x)

    So we see both amplitude and phase are represented at the same time by both the real and imaginary parts. Both the real and imaginary parts are required. I would say that neither is more "real" or "imaginary" than the other.

    The case we had initially with the signal in the real part and the imaginary part set to zero is a special case of zero phase shift. It would be quite reasonable to enter a signal into an FFT as both real and imaginary parts.
    That is not 100% correct. Let us start the other way around. You have a signal described by harmonic components, that is: sine and cosine of a set of frequencies with given amplitudes. The FFT has two input arrays which you feed with the amplitudes (which might be zero), and the frequency is just the index in the array. Now you calculate the FFT and you will get the graph of the function in the time domain. As the FFT returns the values in the same arrays, you will necessarily have two signals that are not in general zero. That is, your time signal is complex! But if one array contains only zeros, your time signal is complex with imaginary part = 0 -- something you normally call "real" or "integer". But which frequency signals in sin and cos transform to real signals in time? The answer is simple: take a real signal in time, place the values in one array, fill the other array with zeros and do the FFT. Now you have a frequency signal that will transform to a real time function.
    But can you foresee that a time signal will be real if you create the spectral function from scratch? The answer is YES. Just take care that the spectral function is symmetrical; that means the amplitude of the sine at frequency 1 is equal to that at frequency N-1, and so on. And for the cosine values, change the sign of the amplitudes. (A numeric sketch of this symmetry follows at the end of this post.)
    As the guy from Hack RF says in his video, look at a signal as a helix, not just a sine wave. The amplitude is the radius of the helix, the phase is the angle you make from the axis to the helix.

    I would like to go back to all the books and articles I have ever read that contain that misleading paragraph quoted above and scratch it out.

    Again: complex numbers are just numbers that can change in two parameters independently. And if you fix one part to zero, then you cannot discriminate them from real numbers. A simple example of a complex signal in time is the two channels of a stereo microphone. Or take two loudspeakers: if one sounds at 100 Hz and the other at 200 Hz, it is very unlikely that this is a stereo signal. But if both generate 100 Hz and 200 Hz at different amplitudes and phases, this will be interpreted as a complex signal.
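    In complex notation, the symmetry condition described a few paragraphs above (a symmetric spectrum gives a real time signal) is conjugate symmetry: X[N-k] = conj(X[k]), with the k = 0 bin real. A quick check, as a sketch (assuming numpy; the bin and values are arbitrary):

    import numpy as np

    N = 16
    X = np.zeros(N, dtype=complex)
    X[0] = 1.0                     # the DC bin has to be real
    X[3] = 2.0 - 5.0j              # pick any bin...
    X[N - 3] = np.conj(X[3])       # ...and mirror it with the complex conjugate

    x = np.fft.ifft(X)
    print(np.max(np.abs(x.imag)))  # ~1e-16: the time-domain signal is purely real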
  • ErNa Posts: 1,738
    Heater. wrote: »
    I was talking about the input signal in the time domain. No mention of the FFT and frequency domain output.

    For example:

    My input signal could be a simple sine wave that fits neatly into my FFT input buffer, say 1024 samples, starting at amplitude zero, ending at amplitude zero and containing one or more cycles.

    In that case, we can say that the real part is the amplitude and the imaginary part (all zeros) is something to do with the phase.

    But what if I now put the same samples of the same sine wave into the imaginary parts of my input? Now my actual input signal has an amplitude greater than either sine wave, given by the length of the vector, sqrt(x^2 + y^2), and a phase shift of 45 degrees with respect to my sample buffer.

    The amplitude and phase are encoded in both the real and imaginary parts.
    It is all about the concept.

    The concept of phase.
    If you have the signal you described, then this signal can be seen as a sine function over time. By definition, a sinusoidal signal with starting value 0 at position 0 is not phase shifted. So the signal in the buffer is not phase shifted. If you now create a copy in the second buffer, there also is no phase shift. Better: there is a phase shift of zero degrees.
    Now you are talking about a phase shift of 45°, but this is not a phase shift, it is an angle. This angle only exists the moment you graph your function in 3-dimensional space in the x-y plane and the copy in the x-z plane. Now for every point on x a third plane, y-z, exists, and the two signals can be seen as the coordinates of a vector which varies in amplitude as you move this plane along the x-axis, but which always has the same elongation in y and z, that is, an angle of 45°. But this is NO phase shift!

    You have to see: the 1024 points in your buffer give only a single point in the spectrum after the FFT, if the original function was one period of a sine.
    If you now copy the signal to the second buffer, after the FFT you will have one point in the first buffer (that is, sine) and a second point in the second buffer, corresponding to cosine.

    So take care what you mean, when you talk about "phase shift"
  • Heater. Posts: 21,230
    ErNa,

    That is a lot you have written there. I have to read it carefully, when I'm a bit less tired, to see if you are agreeing or disagreeing :)

    But:

    Let's forget about the FFT. Wish I'd never mentioned it. I did not want to talk about transforming into the frequency domain. Yet... I'm only considering representing a signal with complex valued samples.

    I did not understand this at all "Now you are talking about a phase shift of 45°, but this is not a phase shift, but an angle.". Where I come from phase is an angle. Often expressed in radians or sometimes degrees. For example a sinusoidal signal can be expressed as:

    x(t) = A.cos(2πft + φ)

    Clearly 2πft is an angle increasing over time, t, at a rate proportional to f. While φ is a constant angular offset. The phase.

    I don't need to go to 3D space to draw this. I might want to go 3D for the complex representation.

    Let me think on the rest of your post later.
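    As a numeric illustration of that formula (a sketch, assuming numpy; A, f and φ are made-up values): sample x(t) = A·cos(2πft + φ), multiply by exp(-i2πft) and average over whole cycles, and what is left is (A/2)·e^(iφ). The phase really is just a constant angular offset you can pull back out.

    import numpy as np

    A, f, phi = 2.0, 50.0, np.pi / 3          # amplitude, frequency in Hz, phase offset
    t = np.arange(1000) / 1000.0              # one second sampled at 1 kHz
    x = A * np.cos(2 * np.pi * f * t + phi)

    c = np.mean(x * np.exp(-1j * 2 * np.pi * f * t))   # leaves (A/2)*exp(i*phi)
    print(2 * abs(c), np.angle(c))                     # ~2.0 and ~1.047 (= pi/3)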
  • ErNa Posts: 1,738
    Heater, I see we have to bring all this stuff down to the very basics. We must remove from the knowledge what we know implicitly, and just take as known what we write down. Then we will succeed.
  • Heater. Posts: 21,230
    Ok folks, you twisted my arm and I went and played with the complex inputs to an FFT.

    lo and behold a great sight was had!

    I don't really want to talk about the mechanics of the FFT but this does show an interesting point....

    Let's say we want to put a single frequency sine wave into such an FFT. We set the real parts according to our calculated sine and set the imaginary parts to zero. Let's use 16 samples to keep it simple. An example input might be:
    Input: real  imag
    -----------------
     0.0         0.0
     1.0         0.0
     0.0         0.0
    -1.0         0.0
    -0.0         0.0
     1.0         0.0
     0.0         0.0
    -1.0         0.0
    -0.0         0.0
     1.0         0.0
     0.0         0.0
    -1.0         0.0
    -0.0         0.0
     1.0         0.0
     0.0         0.0
    -1.0         0.0
    

    And the output is the frequency showing up in a frequency bin:
    Output: real  imag
    ------------------
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0         -8.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          8.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
    

    But wait. That looks wrong. We have a -8 in the 4th frequency bin. That's good because it was 4 cycles of sine we put in. BUT it's only half the correct magnitude, which should be 16. And look, the other half of it has ended up in the 12th frequency bin! Remember the second half of the output is not really valid as it represents frequencies greater than the Nyquist frequency. So we have half our output power showing up as an alias.

    OK, remembering that a sinusoid should be seen as a helix in 3D space, we can add the imaginary part to our input, this time using -cos instead of sin. Our input now looks like this:
    Input: real   imag
    -----------------
     0.0         -1.0
     1.0          0.0
     0.0          1.0
    -1.0          0.0
     0.0         -1.0
     1.0          0.0
     0.0          1.0
    -1.0          0.0
     0.0         -1.0
     1.0          0.0
     0.0          1.0
    -1.0          0.0
     0.0         -1.0
     1.0          0.0
     0.0          1.0
    -1.0          0.0
    

    And we get the output:
    Output: real  imag
    ------------------
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0        -16.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
     0.0          0.0
    

    Yay, that is the expected result! All the power of our input showing up at frequency 4. If we now divide that by the number of samples, as we should, we get the complex [0, -1], with absolute value 1, which is of course the correct amplitude of the input signal.

    Now, I had seen this before in my FFT results. Never really thought about it. To get the correct results out of my FFT I even threw in a factor 2 to compensate. I never understood why I had to do that :)

    Of course, when we are working with a messy real-world signal we don't have the imaginary part, so we have to use zeros there, and our result will always be half the power, mirrored over the two halves of the output array.

    Nobody ever mentions that in the texts. I guess it's too obvious.

    Thank you Seairth for prompting me to look at this anew.

    If you want to play with those number in an FFT there is an online FFT calculator here:
    http://scistatcalc.blogspot.fi/2013/12/fft-calculator.html
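    For anyone who would rather not paste the numbers in by hand, here is a sketch of the same two experiments using numpy's FFT (assuming numpy is available; nothing to do with the calculator linked above):

    import numpy as np

    n = np.arange(16)
    s = np.sin(2 * np.pi * 4 * n / 16)            # four cycles of sine, real only

    # Real-only input: half the energy lands in bin 4 (-8j), the alias gets the rest in bin 12 (+8j).
    print(np.round(np.fft.fft(s), 3))

    # Add -cos as the imaginary part (the helix): all 16 units land in bin 4.
    z = s - 1j * np.cos(2 * np.pi * 4 * n / 16)
    print(np.round(np.fft.fft(z), 3))
    print(np.round(np.fft.fft(z)[4] / 16, 3))     # (-0-1j): absolute value 1, as above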
  • You're welcome. I'm sure this isn't the last we'll talk about it, though. :)

    And I *still* suggest that people read the book at dspguide.com. Imaginary numbers don't show up until something like Chapter 30 or 31, at which point you will have a solid understanding of DSP to help you out.

    Incidentally, your first output example demonstrates Odd Symmetry (the zeroth point is in the middle, not the end). Which is covered by the book. :)
  • If you look at quaternions they get even weirder, because they're 4 dimensional with 3 "imaginary components", i, j, and k. Part of the reason for doing it is apparently to indicate that the numbers exist on different planes, and to make certain aspects of the math work correctly. For example, multiplying normal numbers (A x B) is commutative, meaning that A x B is the same as B x A. With quaternions that's not the case, and apparently the imaginary components help enforce that.

    I kind of agree with Heater in that they really are just 2 (or N) dimensional quantities with certain rules that need to be enforced, but supposedly using the imaginary notation helps formalize that in a very specific way. I actually saw a very good write up on this about a month ago that described it all very well, and laid it out in a way that started small and built it up in layman's terms, but I can't find it now. If I can locate it I'll post it here.

    J
  • Actually, here's a decent summary of the imaginary bit:

    Multiplying a pair of complex numbers ends up like this:

    Say you have a pair of complex numbers, (a,b) and (c,d), that you want to multiply.

    (a + bi)(c + di) = (ac − bd)+(bc + ad)i

    Ignoring the imaginary bits, you get:

    (a, b)(c, d) = (ac − bd, bc + ad)

    ...because i*i = -1, and i = sqrt(-1)

    If you were to take a normal pair of coordinates and multiply them, the math is different.

    (a,b) x (c,d) just ends up being (ac , bd), and it IS commutative. With complex numbers it isn't.

    Not sure if that makes things clearer or not, but it helped me a bit.
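    A quick check of that pair formula against Python's built-in complex type, as a sketch (the four values are arbitrary):

    a, b = 2.0, 3.0
    c, d = 5.0, -1.0

    pair_product = (a * c - b * d, b * c + a * d)   # the (ac - bd, bc + ad) rule
    builtin = complex(a, b) * complex(c, d)         # Python applies the same rule

    print(pair_product)                 # (13.0, 13.0)
    print(builtin.real, builtin.imag)   # 13.0 13.0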
  • Heater. Posts: 21,230
    @Seairth,

    The DSP guide is brilliant. Highly recommended. As far as I recall it is not insightful regarding complex numbers and the FFT itself.

    Please do elaborate on the "the zeroth point is in the middle, not the end" thing. As far as I can tell my output has the zero frequency (A continuous high on the input) in the zeroth entry of the output. As I would expect.

    @JasonDorie

    You pretty much have it nailed there. Complex numbers are just values composed of number pairs (a, b) with some odd rules for arithmetic with them. No need for i and e and cos and sin etc.

    But why do that?

    Because those simple rules end up giving you rotations in the 2D plane. Magic?

    Oh boy, let's not go to quaternions. Same but more so!
  • JasonDorie wrote: »
    (a,b) x (c,d) just ends up being (ac , bd), and it IS commutative. With complex numbers it isn't.

    How is complex number multiplication not commutative? Because you can't swap the real and imaginary parts?
  • Heater. Posts: 21,230
    A complex number might be X = [a, b]

    Another one might be Y = [c, d]

    So now, is X * Y the same as Y * X ?

    or is [a, b] * [c, d] = [c, d] * [a, b]

    That is to say, is complex number multiplication commutative?

    Well, yes it is, Jason is wrong.

    Try it for yourself with some simple examples.

    Or the proof is here: https://proofwiki.org/wiki/Complex_Multiplication_is_Commutative

    Notice how there is no "i" in that proof. Only the weird definition of complex multiply.
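    Trying it yourself takes only a few lines of Python. A sketch using just the pair rule, no "i" anywhere (the example values are arbitrary):

    def cmul(x, y):
        a, b = x
        c, d = y
        return (a * c - b * d, b * c + a * d)   # the "weird" definition of complex multiply

    X = (2.0, 3.0)
    Y = (-4.0, 7.0)
    print(cmul(X, Y))   # (-29.0, 2.0)
    print(cmul(Y, X))   # (-29.0, 2.0) -- the same, so the order doesn't matter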
  • Ahh, yes - I'm incorrect. Multiplication of quaternions is non-commutative, but that doesn't apply to complex numbers as a whole.
  • Heater. Posts: 21,230
    Jason, yes, there is something very profound about that.

    Complex multiplication represents rotation in the 2D plane. If you rotate something by an angle A and then another angle B, (A * B), it ends up in the same position as if you made those moves in the opposite order, (B * A).

    Quaternion multiplication is a rotation in 3D space. Where, in general, the order in which making the rotations matters. A first then B does not get you to the same position as B first then A.

    This all just falls out from the simple rules for complex and quaternion arithmetic. Magic!


  • potatohead Posts: 10,253
    edited 2015-12-12 06:56
    In 3D, there should also be one atomic rotation that will establish the same position and orientation as the sequence of ordered rotations does.

    Same thing for 2D.

  • Heater. Posts: 21,230
    Yes, that is what quaternion multiplication gives you.

    For the two rotations A and B the rotation C is the single rotation that is the same as rotating by A and then by B.

    C = A * B
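    A sketch of that in Python (my own example, using Hamilton's product for quaternions written as (w, x, y, z) tuples). Note that whether A * B means "A then B" or "B then A" depends on the convention used for rotating vectors; either way, the two orders give different quaternions, which is the order-dependence mentioned above.

    import math

    def qmul(p, q):
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
    A = (c, s, 0.0, 0.0)    # 90 degrees about the X axis
    B = (c, 0.0, 0.0, s)    # 90 degrees about the Z axis

    print(tuple(round(v, 3) for v in qmul(A, B)))   # (0.5, 0.5, -0.5, 0.5)
    print(tuple(round(v, 3) for v in qmul(B, A)))   # (0.5, 0.5, 0.5, 0.5) -- different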


    The real deep power of complex numbers, I think, is in the way they replace all of trigonometry with exponentials - and exponentials are easier to manipulate (you don't need to memorize dozens of trigonometric identities, just do algebra).
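    As a small check of that (a sketch; Python's cmath is in the standard library), the angle-addition identities just fall out of exp(i(a+b)) = exp(ia)·exp(ib):

    import cmath, math

    a, b = 0.7, 1.9                      # two arbitrary angles in radians
    lhs = cmath.exp(1j * (a + b))
    rhs = cmath.exp(1j * a) * cmath.exp(1j * b)

    print(abs(lhs - rhs))                # ~0: the exponentials multiply, no identity needed
    print(lhs.real, math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))   # both cos(a+b)
    print(lhs.imag, math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))   # both sin(a+b)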

    Complex numbers mean polynomials have a full set of roots.

    Complex numbers are great for representing both analog and digital filter transfer functions, and mapping between the two.

    I like to think of complex numbers as the basic type of number, with things like reals and integers being constrained, more limited/specialised versions of number.

    Gaussian integers haven't been mentioned yet, I think - they are complex numbers x + iy where x and y are constrained to be integers. One application is elliptic curve cryptography.

    The thing to remember is that all these kinds of number fully obey the rules of algebra, but complex numbers are more general. You can think of them as defined by the rules of algebra themselves - things like commutativity/associativity/distributivity.

    But yes, "imaginary" and "complex" are poor names. "General" or "phase-aware" or "planar"
    might all be better names for complex numbers. Certainly quantum mechanics tells us reality
    is all complex numbers in the inner workings, so "real" numbers are perhaps poorly named too.