Propeller supercomputing - Page 4 — Parallax Forums

Propeller supercomputing


Comments

  • bobr_bobr_ Posts: 17
    edited 2007-09-27 14:42
    For those reading/monitoring this thread:
    I have freshened it up a bit with a new challenge.
    http://forums.parallax.com/showthread.php?p=678603

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    -bobR
  • IAN STROMEIAN STROME Posts: 49
    edited 2007-09-27 22:19
    Lucid guppy,
    Using one of Chuck's brainchildren, the PSC1000 @ 110 MHz, gets a bit warm,
    but it works great if you keep the ice tray on it.

    Best Regards
    Ian

    :-Few persons invent algebra on their own.
  • RaymanRayman Posts: 14,364
    edited 2007-09-28 10:43
    What is all this? Supercomputers are measured in FLOPS, not MIPS, because they are used for floating-point operations. I don't see any value in this integer supercomputer...
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2007-09-28 10:46
    That is like saying there is no value in integer computation. And if you do want to use floating point, high MIPS will equate to higher FLOPS than low MIPS.

    Graham
  • RaymanRayman Posts: 14,364
    edited 2007-09-28 12:03
    Supercomputers are for floating point, not integer operations. I'm not saying a bunch of Propellers wouldn't be good for something. I'm just saying it wouldn't be a "supercomputer".
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-28 16:13
    There is no really accepted definition of "supercomputer", but my feeling is that Rayman is right...
    I prepared the following graph some time ago, showing - without too much precision - the four "phyla" of computers that have shown up during the last 60 years...

    If you believe in "numbers", then now is the time when a new kind of low-cost (10 cents?) nano computer should show up :)

    BTW: We would need just short of a BILLION Props to come close to one of the "Top 10" (in terms of FLOPS).

    Post Edited (deSilva) : 9/28/2007 4:27:47 PM GMT
    [Attached image: chart of computer performance over the last 60 years, 570 x 370]
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2007-09-28 16:29
    Actually, what you said was that there is no value in an integer supercomputer. If you want to do a lot of integer computing, there would seem to be value; you can call that what you want, but it's pretty super to me.

    Graham
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-28 16:44
    Nice try, Graham - but it won't work :) We all know what newspapers are all about ...

    The term "computer" had a well-defined meaning up to 1948. It was a person (most of them women, BTW) sitting in a kind of open-plan office with a few dozen colleagues, performing calculations according to a fixed plan. The two most important groups of customers were the optics industry, needing the exact cut for lenses, and the artillery....

    When someone used the term "Super Computer" at that time, he was most likely referring to the fact that this machine was more powerful than a human being.
  • Graham StablerGraham Stabler Posts: 2,510
    edited 2007-09-28 18:21
    To me, saying floating point is the definition of a supercomputer is silly (it could still be right). Must a computer also be beige to be a supercomputer? What about code cracking? Wouldn't integer math still be rather handy?

    Anyway, I'm going to follow a lovely concept you introduced me to, go back to fighting windmills, and leave this thread alone. I doubt anyone will actually build a Propeller Borg cube anyway :)

    Graham
  • viskrviskr Posts: 34
    edited 2007-09-28 19:12
    Time for a bit of a reality check here. Whether it's measured in FLOPS or not, just make some rough calculations. The record holder these days seems to be about 300 TFLOPS, i.e. 300 trillion floating-point operations per second. A Prop running SPIN code at 80 MHz out of cog RAM is probably managing about 300K SPIPs (SPIN instructions per second).

    Next, a Prop does floating point in a string-like (serial) fashion, so it's kind of slow.

    Let's say you restrict yourself to integer calculations. There is no hardware multiply, so it is done by shift/add, and that's not very fast.

    So, just doing multiplies in SPIN, it's about a factor of 32M slower.

    The Prop is good for doing fast low-level control, but as a supercomputer - between the speed and the lack of memory - it's kind of doubtful. If memory is expanded serially, it's even slower.
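The shift/add multiply mentioned above can be sketched in a few lines. This is a rough Python illustration of the general technique a multiplier-less CPU uses, not the actual SPIN interpreter code:

```python
def shift_add_multiply(a, b):
    """Multiply two unsigned integers by shift/add,
    the way a CPU without a hardware multiplier does it."""
    result = 0
    while b:
        if b & 1:          # if the low bit of the multiplier is set,
            result += a    # add the shifted multiplicand into the result
        a <<= 1            # shift multiplicand one place left
        b >>= 1            # move on to the next multiplier bit
    return result

print(shift_add_multiply(123, 456))  # → 56088
```

Each multiplier bit costs a test, a conditional add, and two shifts, which is why a 32-bit software multiply takes dozens of instructions where a hardware multiplier takes one.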
  • MarkSMarkS Posts: 342
    edited 2007-09-28 22:45
    If you look at the human brain, you'll find that we do not have a few hundred (or thousand) 64-bit processors running in parallel, but rather trillions of very small and low-power processors (I doubt more than 8-bit at best) linked in a massively parallel network and grouped together to control certain tasks. An OS isn't even needed; it's the way the neurons are connected that makes certain systems work.

    What I'd like to see is a company come up with a processor like the Prop, but with dedicated inter-processor communication lines and protocols instead of I/O and/or address/data bus lines. Integrated circuit technology has advanced to the point that you could fit dozens of full 8-bit processors onto a single chip of silicon and still have room for control circuitry. The potential to do massively parallel neural networks is there, but every time someone tries it, they use large-scale processors with massive power requirements and heat dissipation issues. In order for us to ever see true AI, like in I, Robot, for instance, we'll need to go backwards and scale down.

    The Prop is the first processor that comes close to this.
  • rjo_rjo_ Posts: 1,825
    edited 2007-09-28 23:27
    Mark,

    I have looked at the brain... from autopsy all the way up to complex visual recognition. Very few people are in a position to publicly offer an honest and correct opinion about how the brain works. For instance... the brain actively uses fields that are so weak that we have a real hard time trying to measure them. We would have to change field theory to scientifically account for what we already know about what the brain actually does. This is not just my opinion... this is what the experts in the field actually believe. The brain doesn't actually operate at EEG frequencies... cells commonly communicate at 1MHz and above. No-one is talking about these interactions very much, but they are known to exist. As a result of all of this, the raw computing power of the human brain has been grossly underestimated.

    The brain is not simply a massively parallel network of slow 8-bit processors. DNA is involved in everything... and it is a 16-bit system, if you look at each codon as a bit. But if you include complex field interactions, each codon might be more than a bit. Word length is considerably longer than four bits :)

    A good neuroscientist in looking at the Prop would ask... "ok, where do we put the DNA?"

    That's where the large model guys and the pre-processor guys will come in handy.

    We will eventually have a Prop based vision system that outperforms the human brain in many ways... and it won't take a lot of Props to do it. We don't need a Prop II. Prop I is just fine. Not that we don't want all of the new goodies we should suspect will be in the Prop II... we do, we do.

    Everyone is just scraping the surface right now... the Prop is real deep.

    Every time I ask a question... ? "would it be possible to do the following .... nag, nag, nag?" I have gotten the answer: "YES." Frequently I get a technical explanation... "and this is how it would be done."

    Knowing what is possible and actually doing it are two different animals. BUT from what is possible, we haven't seen anything yet.


    Rich
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-29 00:00
    I wonder if anyone has looked at my nice chart? Or has even understood it :)
  • rjo_rjo_ Posts: 1,825
    edited 2007-09-29 00:10
    Where?
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-29 00:21
    Nine postings up :)
  • rjo_rjo_ Posts: 1,825
    edited 2007-09-29 01:08
    OK

    I found the chart... and I always get mixed up... I think FLOP refers to floating-point operations... but it could be flip-flops.

    It wouldn't make any sense to me to make a chart about floating-point operations when flip-flops are far more important (unless you sell math coprocessors).

    How many flip-flops do you suppose the Prop has?

    Rich

    P.S. the only reason for floating point math is an error in our thinking.
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-29 07:19
    (a) It's FLOPS, not FLOP. The "S" is not a plural but means "per second". What the "P" means is still unclear... there are two possibilities...

    (b) There is generally a strict relation between FLOPS and the internal complexity of a machine. When you look at enlarged images of a processor die, you will notice two very large structures: one is the cache memory and the other the (SIMD) Floating-Point Unit.
    DSPs need extremely fast data throughput, but the low-cost models do it without FP, so those are underrated when compared by FLOPS.

    (c) FLOPS is the metric best advertised, published, and challenged since the times of the ENIAC. So in a way we measure what's easily available; but see (b) for an excuse.

    (d) FLOPS are the only "raison d'
  • MarkSMarkS Posts: 342
    edited 2007-09-29 11:29
    deSilva said...
    (a) It's FLOPS, not FLOP. The "S" is not a plural but means "per second". What the "P" means is still unclear... there are two possibilities...

    FLoating point Operations Per Second
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-29 12:24
    Possibility 2:
    FLoatingpoint OPerations per Second

    Even (remote possibility 3):
    FLOatingPoint operations per Second
  • rjo_rjo_ Posts: 1,825
    edited 2007-09-29 13:01
    I was a little tired and made a typo.

    This kind of graph has been around forever. My point was that it is somewhat irrelevant in describing architectures whose functionality is not dependent upon floating-point operations.

    For example, if we tried to measure the brain in terms of FLOPS... some people would be around one flop per minute.

    Why not talk about LOPS... logical operations per second? or Sigs... signal integrations per second... or FIGS ... functional integrations per gigasecond?

    It irritates me when people talk about SPIN in terms of "instructions per second"... and then leave out the effective power of those single instructions.

    Rich
  • rjo_rjo_ Posts: 1,825
    edited 2007-09-29 13:15
    How many FLOPS does a desktop Mac require ... before it can output a composite sync?
  • simonlsimonl Posts: 866
    edited 2007-09-29 14:03
    FLOPS, TFLOPS, supercomputer, or supermicro - it's all semantics! What does it matter what the thread's called? I think the essence of the initial post was that the Prop lends itself to easily expanding to fit a solution requiring more computational power - right?

    For what it's (not?) worth, I'm with Rich; I'd say the brain is more powerful than the best supercomputer when applied to real-world tasks like understanding, learning, and making tea (!), and _it's_ not measured in anything per second. Heck, it's not even digital.

    If anyone's interested, I'd strongly suggest they read Steve Grand's books - especially "Growing up with Lucy"...

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Cheers,

    Simon
    www.norfolkhelicopterclub.co.uk
    You'll always have as many take-offs as landings, the trick is to be sure you can take-off again ;-)
    BTW: I type as I'm thinking, so please don't take any offense at my writing style smile.gif
  • rjo_rjo_ Posts: 1,825
    edited 2007-09-29 14:40
    Simon,

    Thank you very much.

    In terms of analogies, I think it is helpful to consider DNA as almost purely digital... after that, things look very analog. There are even tubes involved... itty bitty tubes, no joke. Then, when you get to the highest level of performance (complex decisions), things start to look digital again.

    People like to argue over benchmarks. The Prop is so new and so sufficiently different that we need new benchmarks if we want to fairly compare it to other, less functional architectures.

    In economics there is something called "fair value." "Fair value" is great for comparing available assets for present uses. "Future value" is far more complex, and it is future value that drives long term investments and planning.

    Engineers have the job of evaluating and recommending competing architectures. Many times their decisions are based on fair value... the Prop is tough to evaluate because of the complexities of current business and the perceived cost of development.

    In terms of "future value," the Prop is a hands down winner... but it takes a fair sized company to make "future value" kinds of decisions.

    For whatever reason, the forum tends to give the impression that you have to be a genius to develop on the Prop or you have to accept long lead times to convert over to the Prop.

    And it just isn't true.



    Rich
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-29 15:25
    I still want to make my point clear, and it may help to do it in some systematic way: a, b, c, d,... :)

    (a) There is no such thing as witchcraft in chip design. Given a certain amount of silicon and some production constraints (nearly equal all over the world), you will come out with a highly predictable and comparable amount of "power" (see AMD/Intel). This "raw power", however, can be spent on a wide range of tasks: DSP, number crunching, pattern matching, logical inference,.... An architecture "tuned" to one of those fields will most likely flop in another.

    (b) I shall not comment on the Propeller as such: I see its nice features but also its grave shortcomings. It is obviously neither tuned for DSP nor for FP.

    (c) Measuring (or "benchmarking") a machine is one of the great sports of the industry. But all those tiny differences end in smoke. That's one reason I posted the curves. They are not really smooth, but viewed across 60 years' history they seem so...

    (d) As I said earlier, it doesn't make much difference what units of measurement you use. FLOPS - because of their ubiquitous availability - are perfect, for the reasons I gave.

    (e) The other reason I posted the chart was to remind you of the orders of magnitude we are talking about. The logarithmic y-axis has probably obscured the basic facts, so I shall repeat: an ASCI machine, the Earth Simulator, or Blue Gene is about a billion times more powerful than a Prop (but costs only 10 million times more). It is not just the mouse and the elephant.

    (f) A highly interesting approach has been presented by the Tile64:
    www.tilera.com/products/processors.php - alas, too expensive to play with...
    They are quite successful in addressing the communication bottleneck.
    [Attached image: Tile64 block diagram]

    Post Edited (deSilva) : 9/29/2007 3:30:46 PM GMT
  • MarkSMarkS Posts: 342
    edited 2007-09-30 02:51
    deSilva said...
    Possibilty 2:
    FLoatingpoint OPerations per Second

    Even (Remote possibilty 3):
    FLOatingPoint operations per Second

    In English, the "per" is always part of the abbreviation. For instance, miles per hour (MPH), kilometers per hour (KPH). In this instance, the "point" in floating point is assumed. Technically, it *should* be FPOPS, but I digress...
  • toru173toru173 Posts: 17
    edited 2007-09-30 11:57
    I always used to think it was FLoating OPerationS per second - but that's just me

    Also, check out the Transputer. The prop has always reminded me of that wonderful chip
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-30 12:43
    toru173 said...
    The prop has always reminded me of that wonderful chip
    The similarity is skin-deep, I should say :)
  • Bob Lawrence (VE1RLL)Bob Lawrence (VE1RLL) Posts: 1,720
    edited 2007-09-30 15:11
    From Wikipedia, the free encyclopedia

    In computing, FLOPS (or flops) is an acronym meaning FLoating point Operations Per Second. The FLOPS is a measure of a computer's performance, especially in fields of scientific calculations that make heavy use of floating point calculations; similar to instructions per second. Since the final S stands for "second", conservative speakers consider "FLOPS" as both the singular and plural of the term, although the singular "FLOP" is frequently encountered. Alternatively, the singular FLOP (or flop) is used as an abbreviation for "FLoating-point OPeration", and a flop count is a count of these operations (e.g., required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate.

    Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than FLOPS.
    The standard SI prefixes can be used for this purpose, resulting in such units as teraFLOPS (1
  • deSilvadeSilva Posts: 2,967
    edited 2007-09-30 15:23
    (a) It should read 10^12 for Tera

    (b) Of course there are programmable calculators; but 10 FLOPS indeed comes close to the truth, maybe 1000.

    (c) I once ran 6 COGs with Float32, giving around 1 MFLOPS, which is quite good. I think I mentioned it somewhere, perhaps in comparison to the uFPU, which gives you a little bit less.

    (d) As seen in my posted chart above, Konrad Zuse's Z4 could deliver 1 (running @ 37 Hz; relays could not tick faster...), and Eckert and Mauchly's ENIAC around 300 (both 60 years ago).
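Putting the rough figures quoted in this thread side by side (Z4 about 1 FLOPS, ENIAC about 300, a Prop with Float32 on 6 cogs about 1 MFLOPS, a 2007 Top-10 machine about 300 TFLOPS), a quick back-of-the-envelope check in Python confirms the "short of a billion Props" estimate from earlier in the thread:

```python
# Rough FLOPS figures quoted earlier in this thread (orders of magnitude only)
machines = {
    "Zuse Z4 (relays @ ~37 Hz)": 1.0,
    "ENIAC": 300.0,
    "Propeller (Float32 on 6 cogs)": 1e6,
    "2007 Top-10 supercomputer": 300e12,
}

top10 = machines["2007 Top-10 supercomputer"]
prop = machines["Propeller (Float32 on 6 cogs)"]

# How many Props would it take to match a Top-10 machine on raw FLOPS?
print(f"{top10 / prop:,.0f} Props")  # → 300,000,000 Props
```

That is a few hundred million Props for the FLOPS alone, before counting the communication overhead of wiring them together, which is the point of (e) above.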