
Propeller supercomputing


Comments

  • Graham Stabler Posts: 2,510
    edited 2007-10-19 15:42
    One problem I have had through the years is a rejection of knowledge if I could not fully integrate it. I didn't realize that sometimes you have to go with the flow a bit to let things sink in and then TRULY understand it. Having teachers that don't really understand what they are teaching does not help much either.

    As an example if f(x) = e^x then df/dx = e^x

    That I was taught, but what followed was a million examples like f(x) = e^2x, df/dx = 2e^2x

    We went through pages of these exercises until we could always get the right answer. But at the end none of us realized that what we had really done was program our brains with the equation:

    d/dx(e^g(x)) = dg/dx · e^g(x)

    Sorry for the maths and I hope I got it right :)
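
    Written out in general form (just a restatement of the rule above, nothing new, same notation):

        \frac{d}{dx}\, e^{g(x)} = \frac{dg}{dx}\, e^{g(x)}, \qquad \text{e.g. } g(x) = 2x \;\Rightarrow\; \frac{d}{dx}\, e^{2x} = 2\,e^{2x}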
  • potatohead Posts: 10,261
    edited 2007-10-19 15:58
    I've got the same problem.

    For me, it's literally got to mean something and be applicable somehow. A table of equations really helps form sort of a pattern recognition path. It's like a regex for the brain, linked to a case statement. Brutal, but it works. How well it works, depends on the kind of people it's applied to and how they learn. (what kinds of paths they depend on)

    Take an equation and apply it to some real world behavior, then it's golden! Never forgotten, easily applied.

    Tables of stuff, unless very simple and often used, are just tables of stuff. The rules that built the table are what I really learn well, not specific evaluations of those rules. A lot of my peers, like me, are the same way.

    In that example, perhaps linking that equation to concepts, common among people, is too difficult. So, the teaching problem escalates, requiring brute force instead of reasoned linking and application. IMHO, I've always thought pure repetition is the method of last resort. If you repeat something, maybe with variations, a ton of times, you literally carve out a path and make links slowly because the relationships required for said links are just too much for a quick and easy association to take root. It's like pounding a nail through hardwood. The brain gets changed, just a little, right then and there. Ideally, you are better off for it!

    History classes were always tough too. If it's a bunch of significant names & dates, forget it. I'm gone, checked out. Tell me a story, let me build a model of what that time was like, who the people were and why they did what they did, and I'll remember and use all of that. --still will forget the dates though...

    Another one:

    I often will recall where I saw a specific reference. Can tell you where it's at on the page, the font and formatting, if it was near a picture, etc... but need to actually go look at it. Would be a lot easier to just remember it, and forget all the meta-information about it. Annoying as all get out.

    Edit: Always enjoyed teaching / learning environment. (not so much being the full time teacher.) My career path and skill set didn't take me down that path, early enough to warrant the effort. Went off to manufacturing and engineering instead. Been able to get part of the way there however.

    Today, I do teach adults often. The area I work in is engineering data management and process, also mechanical CAD. Often it's training or consulting.

    Adults often learn a lot differently than kids do! I can show adults some CAD stuff, they can repeat it, and retain it for a while. Or, one can show them the CAD stuff, and apply it to some real world stuff they are doing, for context. They retain it much easier and in the longer term. Process seems to be a big link. The average adult, if they grok the process, will easily go digging for some little detail, maybe forgotten, in order to complete it. If that process is muddy, that same detail becomes an obstacle that's a problem and source of frustration instead.

    With kids, they will accept some things and build on them. Most adults need to incorporate new things into the body of things they have already accepted, if they are to be learned and useful. Not sure when this change occurs, but it is real. It's also not a set thing. If one can get adults in a playful mood, the learning will revert back to more kid like modes, means and methods. (humor, story telling, and role-playing are good devices for sparking this)

    Going back to the nail pounding example, our brains seem to become more rigid when we play less. Play more, learn more. IMHO, that's literally true!

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Propeller Wiki: Share the coolness!

    Post Edited (potatohead) : 10/19/2007 6:58:27 PM GMT
  • bambino Posts: 789
    edited 2007-10-20 06:28
    Potatohead, that play-then-work thing goes thru my head a lot, because as a school bus driver it's taught every month at safety meetings.
    When something becomes second nature, it becomes prone to human error. Example: given a new route to follow, I read the directions. I already know the streets they're talking about, so it's a challenge. I dare say that the first time following the route there's an element of child's play. But as the days go by I must warn myself of the dangers of driving, because the route starts to look like the back of my hand. The thrill of something new is gone and a lazy attention to detail takes its place. I try to curb that familiarity with a grain of fear, but I still make a wrong turn occasionally on a route I have done for many months!
    This doesn't fit your scenario exactly, but may explain where the play leaves us.

    Another example is a saying I have heard many people say: "I loved doing this as a hobby, but when I started getting paid to do it, it just isn't fun anymore!"
  • Paul Baker Posts: 6,351
    edited 2007-10-23 15:17
    Ken, autism has recently been shown to be a product of environment. They discovered this by studying a few genetic twins where one twin developed autism yet the other didn't. Autism, it turns out, is an epigenetic disease, or a disease in the expression of DNA rather than the DNA itself. This is caused by methyl groups binding at CpG sites on DNA, effectively turning that gene off; this is the mechanism the body uses to turn stem cells into a differentiated cell, such as a skeletal, muscle or nerve cell, but the process doesn't stop there. After birth, our experiences, both physical and emotional, continue to shape the epigenetic process and effectively shape us into the person we become on a very physical level. This is the nurture part of the equation, and in its entirety it has a more profound effect on our body than the DNA we have. Over 99% of our DNA is identical to that of every other person walking the face of the earth, but if you look at the epigenetic expression of a person's DNA (which genes are currently "turned on") you get a unique fingerprint, even in identical twins.

    Sorry for the OT'ness, too cool of a subject not to answer Ken's question to _rjo

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Paul Baker
    Propeller Applications Engineer

    Parallax, Inc.
  • potatohead Posts: 10,261
    edited 2007-10-23 16:33
    bambino, totally.

    A very similar thing is emphasized in manufacturing. (well, at least the better manufacturing groups) Go on auto-pilot, lose a finger, or something... Not good. Nobody wants to be a member of the 9 club.

    One of the first places I worked, we took time to play every so often. Was something simple, jokes, gags, maybe something fun, like shooting things at a target with contrived compressed air accelerators, whatever. All of us learned a ton in that environment, we just had to keep it quiet! Numbers were good, so we rarely saw any trouble from higher ups, only focused on the work element.

    I was attending some college at the time (liberal arts), working in the evenings. Most of the people I worked with had little to no secondary education, but all of them were very sharp people, full of all kinds of information, processes, etc... Having compared other environments since, I believe to this day, that play ethic had a very positive impact on their general ability to learn, retain and use information. Lots of hobbies too. Writing, singing, building, etc... There was very little difference, in terms of overall depth, between these people and those I met with strong formal educations.

    Another attribute was passion. Loving what you do has a serious impact on it. These guys conveyed to me a sense of value to the work efforts. When we created complex things, on time, and looking top notch, that's worth the sweat. (the pay sure wasn't) That forced another decision as well. If I don't feel any passion, and that means taking some time to explore to see if it can't be teased out, then it's not worth doing, no matter the pay. The play ethic has served me well. Can't say the passion one has (in terms of dollars and corporate accomplishment), but I'm never unhappy while working!

    Paul, that is an extremely interesting bit of information you've posted. Thanks! My wife and I talked about something like this, when the kids were young. Both of us grew up in rural environments. Lots of outdoors, work, play, quiet time, etc... We are city dwellers (for now), and were concerned about the kids' adaptations growing up. Her parents had a nice place, off the beaten path, near the beach. We spent as much time there as we could, with the kids outside doing stuff (dad too). Building fires, lean-to's, hide 'n seek, animal watching (sneaking around, lying quiet hoping to catch a glimpse of cool animals), etc... Tech was downplayed during that time. No subscription TV, limited Internet, etc...

    Our kids are different than many city kids are as a result. More robust and tolerant of less than ideal conditions. Less focused on beating the joneses too. One curious thing, that's directly related to your post, is their overall health. Being exposed to a lot of varied environments, they just don't get all that sick very often at all. IMHO, this is the nurture / physical thing in play too. I like the idea of DNA expression being malleable. This means we've all got as much potential as we have the will to exploit. Good stuff. Sometimes I look at all the genetic studies and wonder if we couldn't all be bagged and tagged as this kind of being, or that, and that's the end of it... Of course it also means respecting our environment is a very real thing.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Propeller Wiki: Share the coolness!
  • bambino Posts: 789
    edited 2007-10-23 21:32
    Know what you mean there, Potatohead! I am a member of the 9 and 11/16ths Club because a trainee asked me "what does this do?"
    It would not have upset me as badly had I not asked him to keep his hands in his pockets before I started the lesson! Some people's train of thought cannot be interrupted by a simple "keep your hands in your pockets" when their mode of learning is hands-on!
  • bunni Posts: 38
    edited 2007-10-24 08:15
    I haven't gone through all the posts (I don't feel like reading 170-some entries), but has anyone yet tried to accomplish this, and if so, did anyone succeed in building an array like that, say 8x8?

    -Kris
  • deSilva Posts: 2,967
    edited 2007-10-24 08:27
    Excellent question!
    I have "stacked" 4 Propellers some months ago, with little success so far (and nothing more done since then).
    I think I have a very good idea of what is needed to accomplish before you can do anything useful with systematically inter-interlinked Propellers (which is called "the general multiprocessor problem").

    There is literally no software or hardware support for that to my knowledge... Everybody has to start at the beginning.
    "Arrays" or "Grids" will make things even more complex.
  • Fred Hawkins Posts: 997
    edited 2007-11-03 18:44
    Alternatively, 8 PS3s and sundry hardware plus Linux: http://blogs.zdnet.com/storage/?p=220&tag=nl.e540

    Interesting description of ps3 internals. Maybe we're a beginner's class here.

    Be sure to look at the link to the MIT notes: http://cag.csail.mit.edu/ps3/lectures.shtml - Notice their schedule at the end of the Introduction. Reminds me of how they teach Japanese (here's your syllabaries, katakana and hiragana, then tomorrow...)

    Post Edited (Fred Hawkins) : 11/3/2007 6:55:34 PM GMT
  • deSilva Posts: 2,967
    edited 2007-11-03 19:02
    @Fred:
    (a) Yes, we are beginners insofar as we have not yet passed the "physical layer issue", which is the most trivial of all
    (b) No, as this is just a different technology level: having a different price AND a different price/performance ratio - have a look at my "computer phyla chart"
    said...
    Each SPE has 256 KB local store, a memory controller and a “synergistic processing unit” (SPU) with a Single Instruction, Multiple Data processing unit and 128 registers of 128 bits each. They’re connected by a bus with an internal bandwidth of more than 300 GB/s that transfers data between the SPEs.

    The bottom line: you can go to Toys-R-Us and toss 200 GFlops into your shopping cart.

    A COG has - as is well known - 1/128 the local store, 1/2 the word length, and neither fixed-point multiplication nor a touch of floating point support. The inter-COG bandwidth is best case 4 bytes/16 ticks = 20 MB/s = 1/15,000

    So it's some-hundred times more powerful and costs some-dozen times more - so what?
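
    Working that out (assuming the usual 80 MHz system clock, and taking the 300 GB/s Cell figure quoted above):

        \frac{4\ \text{bytes}}{16\ \text{ticks}} \times 80\ \text{MHz} = 20\ \text{MB/s}, \qquad \frac{300\ \text{GB/s}}{20\ \text{MB/s}} = 15{,}000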
  • Fred Hawkins Posts: 997
    edited 2007-11-03 20:21
    So what? This: we may not have the same specifications or capabilities, but we (Propeller and PS3 users) are both using parallel processors. I, for one, found the MIT lecture notes a wonderful stepping away from this forum's helter skelter to an organized and stepwise approach to common topics. I think that their PDFs are worth at least a quick glance by those here who may lack a suitable foundation to understand parallelism. (Personally, I saved them to a folder. But then again, I save lots of stuff all the time.) But it's worth mentioning, I think.
  • deSilva Posts: 2,967
    edited 2007-11-03 20:30
    Oh that, of course.. I couldn't agree more
    But I gave up talking seriously about massive parallel computing (which is the only INTERESTING kind of parallel computing) in this forum already some time ago :)
  • potatohead Posts: 10,261
    edited 2007-11-03 21:53
    IMHO, the middle ground is also interesting.

    We've a lot of software out there that depends on fairly high speed linear computing. Currently we are in a window where linear speed boosts are less frequent, and of dubious merit for a whole lot of tasks.

    Prop II, and to a degree the current Prop, will put more tasks into the parallel arena. The C compiler will mitigate this somewhat, but still there are lots of little challenges where nice gains can be had with different approaches. Part of why I like this chip so much.

    Call it somewhat parallel computing. (nice link Fred, BTW)

    My day job involves MCAD. (mechanical CAD) This stuff really puts the pressure on linear computing. The current speed problem has forced a lot of compute management and has restricted what modeling techniques are applicable too. A subset of this is GFX, which is seeing some solid gains from parallel computing, and that's clearly where Sony chose to focus.

    Strictly parametric modeling systems (which means just about every modeler out there right now), are powerful, but also complex. Variational ones (of which there are only a coupla serious modelers, with one being scuttled), offer the potential for very different workflows, given some parallel support in both hardware and software. Two such examples:

    On an assembly model, each sub-part model could be processed on its own CPU. Combine this with a feature relations map, created as things are changed, and suddenly one could manipulate very large assembly models on clustered devices, or smaller ones in near real time --both options given enough RAM to handle the larger pool of simultaneous equations that are gonna come into play.

    Multi-user, shared computing resources. NUMA style machines are those where there is one OS image running and CPUs are connected by high speed internal buses and have their own local high speed RAM. Processes are distributed among these as needed, based on user demand. Better parallelism would mean far more effective use of these kinds of machines for either case of more users, or smaller numbers of users, but with more complex problems.

    The big problem with massive clusters is their interconnects. They are generally too slow for a lot of problem classes. On the other hand, they are really cheap, compared to a NUMA style machine. This has kind of fragmented code efforts.

    I suspect many tasks could be better addressed with an increased level of parallelism. Until recently, regular linear gains have pushed this to the side lines. Enter the middle ground where things are just more parallel, where it makes sense. This is interesting because it can bring some of the benefits from clusters and NUMA to the multi-core machines we are seeing be produced today.

    For a while (how long I don't know), it will be less expensive to produce multi-core / CPU machines than it will be to produce really fast, as in order of magnitude faster, linear ones. Done right, there are also power considerations as those cores could be allocated / activated dynamically, or at least throttled, depending on their use. Linear machines do this, but it's more of an all or nothing affair that is too coarse.

    We've pushed linear techniques very far. I don't think the same efforts have been applied to multi-processing, so in a way, software is actually behind again because hardware is forcing a change.

    The kinds of things the Prop does well, and how it does them, play into this whole dynamic nicely. The design couldn't be more relevant, particularly on the power management and robustness side. Say one has a higher speed micro. Maybe it's got more RAM, stacks, interrupts, the usual goodies. The software effort on it is fairly complex. It's complex on the Prop too.

    However, that complexity on the Prop is because a lot of multi-processor learning has gotta happen along with some different workflows.

    I believe we will see Propeller understanding and thinking yield really significant gains, where that same effort on Linear devices is more of an incremental thing. I also think we will see a higher degree of robustness as most problems end up being timing problems, and in a deterministic environment, once solved will present very little variation and thus failure.

    One other interesting element here is the power consumption one. Say you've got a linear device idling at a low frequency for power reasons. Now, it's difficult to get real time stuff to happen because the interaction loop is too long. On a Prop doing the same thing, real time stuff will be easier because of how the COG's work. One can be watching input, another can be computing, making decisions, another can be doing I/O, etc...

    Power management is also very different too. On the Linear side, hardware tends to be very robust. It has to be in order to smooth out a lot of things made tough by the nature of the CPU. On a device like the Prop, software does a whole lot more, thus keeping hardware complexity lower. Power management comes with the ride and does not depend so much on lots of complex code and shared libs and complex hardware to happen. Anyone can come up with their best case power consumption scheme on a Prop, without significant hardware investment, by comparison. It's in the box baby!

    Given the timing problems are solved in tandem with properly distributing the problem, a Prop in this scenario will realize far greater performance and reliability at a lower cost than a linear device will, particularly one coupled with multiple add on devices to do in hardware what one would just do with a few COGs on the Prop.

    Just watch as a whole lot of vendors begin to grok what Chip did. Their efforts will look more like ours are, not the other way around.

    Computing is pervasive now and getting more so. Look at PDAs and such. If those end up being more parallel devices, and the code that runs on them ends up being the same, computing on them begins to look a lot different. Running one's PDA for a week on a battery charge could be a function of some aggressive software instead of huge and energy-intensive investments in battery technology, by way of one example.

    So, parallel / multi-processing in general terms is interesting now. Probably gonna be more so in the mid-term future, if for no other reason than for pure energy consumption, and portability reasons.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Propeller Wiki: Share the coolness!
  • Ale Posts: 2,363
    edited 2007-11-03 22:03
    deSilva:

    Don't give up so early. As with every new "concept/application", it needs time to grow on people. Parallelism is actually a quite difficult concept to implement when we ourselves tend to do things in a sequential fashion.
    Consider for instance the problem we were discussing the other day about a C64 emulator. The only way to solve it is with parallel tasks. Treating each core as a "normal" processor makes solving the problems impossible. They are not normal processors. But you know that already. Task division is the way to go :). Message passing and similar algorithms are the way to go.
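
    As a rough illustration of the kind of mailbox-style message passing I mean (a conceptual Python sketch only, not Propeller code; on the Prop each "mailbox" would be a long in hub RAM that a cog polls):

        import threading
        import queue

        def worker(name, mailbox, results):
            # Each worker polls its own mailbox, like a cog watching a hub location.
            while True:
                msg = mailbox.get()              # block until a message arrives
                if msg is None:                  # None acts as the shutdown signal
                    break
                results.put((name, msg * msg))   # do the divided-up piece of work, post the answer

        def main():
            results = queue.Queue()
            mailboxes = [queue.Queue() for _ in range(4)]          # pretend these are 4 cogs
            workers = [threading.Thread(target=worker, args=(i, mb, results))
                       for i, mb in enumerate(mailboxes)]
            for w in workers:
                w.start()
            for n in range(8):                                     # divide the task round-robin
                mailboxes[n % 4].put(n)
            for mb in mailboxes:
                mb.put(None)                                       # stop every "cog"
            for w in workers:
                w.join()
            while not results.empty():
                print(results.get())

        if __name__ == "__main__":
            main()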

    Some time ago I was tempted to build a modular robot. Made of cubes or similar shapes, all with exactly the same functionality. They work in groups, and perform parallel tasks.
    Construction difficulties (lack of enough pieces of exactly the same kind) made me postpone the build for a while.
    http://unit.aist.go.jp/is/dsysd/mtran3/
    I thought it was a great idea: communication among autonomous units, all working together. Maybe "too" autonomous for what you were discussing.

    Gru
  • deSilva Posts: 2,967
    edited 2007-11-03 22:50
    Well, you tricked me into 6 statements :)
    (1) The propeller is something you can experiment with, finding out whether general statements based mostly on ideology are founded or not, on a very low cost basis. This is what I find highly interesting.
    (2) However I feel myself crippled with a host of restrictions... So I was not yet as successful as I thought wrt item (1)
    (3) My experiences with real-time applications have taught me not to distinguish between "real" and "virtual" parallelism. I have no problem managing 200 "real-time" threads in one processor, so I would still prefer a 640 MHz CPU over 8 x 80 MHz COGs.
    (4) It is true that "linear computing" has come to an end now; some of the very instructive Cell-Engine lessons found by Fred explain that clearly again. But true only for "High End Computing"....
    (5) What I should rather like to experiment with would be 64 or better 1024 processing elements. It is not a task to connect 8 or 128 Propellers, but then the "management problems" will start (resolved already many years ago with the Transputer). There is no "intermediate level" support for interconnected Propellers, and I do not expect this coming from Parallax...
    (6) 128 Propellers will be $1000+. They will have only a fraction of the raw power of a PS3....
  • potatohead Posts: 10,261
    edited 2007-11-03 23:16
    #1 Totally.

    #2 IMHO, debatable. Depends on computing scope, more than anything.

    #3 What if power consumption and hardware cost are primary factors? (this is where I find the Prop very interesting, with V2 really scaling to a sweet spot)

    #4 I don't know that the whole "only for high-end applications" bit really does apply. Ordinary basic computing applications are growing complex. Expectations for what is high-end are falling rapidly. Bet we see some change here.

    #5 Yep. Agreed.

    #6 Agreed too, but scope again does apply. There is lots of power in a PS3. Lots of complexity too. It's very nice to have a device where it can be dedicated easily, and with little in the way of externals. Also quite interesting in the tradeoffs.

    If I have that $1000 to spend on a nice cell system, I've got one box that can run some seriously powerful code. Damn cool. However, that same $1000 spent on a variety of hardware, chosen to work with the Prop, now I've got access to a lot of tools, some software, some hardware. It's a tough choice!

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    Propeller Wiki: Share the coolness!
  • Jo Posts: 55
    edited 2007-11-03 23:46
    My day job is writing EDA (Electronic Design Automation) tools for high end designs (http://www.clkda.com if you are curious) and I can tell you for sure that scaling just from clock speeds and faster instructions is definitely over and gone. Once processors reached 4 - 8 GHz clock speeds, heat dissipation became a major issue, over and above the normal issues one runs into when dealing with a clock frequency that only allows about 10-14 gate delays between flops. All the new high end CPUs are multicore, be it Intel, AMD, IBM, etc. Everyone is exploring the same directions. And the HotChips conference also was primarily focused on parallelism, multicore and related facets. The future *is* parallel/threaded/multicore :)
    The trickier part is figuring out how to program those monsters.

    Anyway, getting back on track, don't forget what the Propeller is and was designed to be: a microcontroller. Designed to not need many external components, simple instruction set, predictable instruction timing, lots of general purpose IO pins. Use it the way it is intended.

    It is cheap enough and easy enough that I can let my kids play and explore things with it without being too concerned about them breaking it. Not the cheapest microcontroller out there, sure, but the only one I know that can drive a monitor/TV directly, and that interface is absolutely necessary to get my kids to think of it as a "real" computer! And it is fun to play with, which is the part I like.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    ---
    Jo
  • deSilva Posts: 2,967
    edited 2007-11-03 23:59
    There are some possible misunderstandings that come to my mind:

    (A) Parallel what? There is parallel activity within each processor (pipeline), within each microprocessor (I/O units such as UART, CAN, timers/counters,...), on each board (southbridge, GPU,...)
    But we generally talk about "symmetric" parallel computing. Nearly all applications I encountered in this forum use the true parallel and symmetric resources as "programmable I/O". So are we talking about that "bit banging flexibility" rather than true parallel computing?

    (B) From its beginning the computer was a serial unit, highly hierarchically organized and allowing only one or very few signal activities at a time. We try to access memory as hard as we can, but only a tiny fraction of it is used within a second, which is close to eternity to a CPU.
    As I said in another thread, a computer can start with 4000 switching elements, so parallel or not made no difference in those early days... Today nearly all of a (serial) computer is unused for nearly all the time. That is why modern power management (switching off idling functional units) on a CPU is so successful.

    (C) When I was talking of massive parallel computing, my idea was something like this: Today we have memory. Even people working with the Propeller are generally not aware how much of the memory they use. Ask them! 10%, 90%, who cares! They become aware when something overflows. 99% of memory trouble on the Propeller has its root in a missing separate video memory...
    But I lose my thread... I think parallel computing will not rule as long as we have to care that our processor resources do not overflow :)

    Post Edited (deSilva) : 11/4/2007 12:09:01 AM GMT
  • Ken Peterson Posts: 806
    edited 2007-11-04 00:00
    The value in the Propeller that I see is this:

    1. The hobbyist can work with a multiprocessing system (this is really cool). Computer science in general needs to emphasize the parallel approach because this is the future.
    2. The Propeller isn't necessarily the least expensive uC, but the demo board is dirt cheap and the Propeller tool is free
    3. The chip has been designed to be as flexible as possible, rather than having a bunch of dedicated hardware
    4. The Parallax forum provides a wealth of assistance for those dipping their toes in the lake of the Propeller!

    The realm of supercomputing seems to be dominated by parallelization rather than clock speed. I feel the Propeller provides a valuable learning tool and a useful experimentation platform to that end.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔


    The more I know, the more I know I don't know. Is this what they call Wisdom?
  • rjo_ Posts: 1,825
    edited 2007-11-04 00:53
    Fred(et al),

    Many thanks for the link... good for the whole family.

    I haven't read it all, but I did look at the schedule and then I noticed that the final grade didn't depend much upon actually understanding what was being said but on being on the right team, which is able to hack together a project in one month. This kind of structure encourages cheating.

    I found the dogmatic proclaymation (sic :)) about programmers not needing to understand processors really interesting... where do ideas like this come from?
    When one comes to this conclusion, one basically determines how an architecture won't be used (of course you don't really know it at the time... and you wouldn't know it later unless someone told you :))


    So, if the programmer doesn't really need to understand the processor... then all he really needs to understand are the abstractions, right? And which abstractions might those be? And if there is a problem with the abstraction... and that abstraction becomes the holy grail of ideas, then where are we?

    I was looking at the PS3 and the Propeller at the same time... and the determining factor was the depth of the available information... and the fact that the processor was actually approachable without using anyone else's abstractions, trying to become a master of an unstable operating system or learning a new language, all of which would have a usable half life of about 2 years before something changed significantly.

    If one drops all of the abstractions and comes up with a novel problem that requires a parallel approach, I am confident that implementing a massive Propeller system is probably as easy as or easier than any other platform out there. On the other hand, if one sticks to problems that have been solved in the past... then I am sure that there are other platforms that might be easier to approach, since canned applications are already available and simply require fitting to the abstracted architecture.

    In many areas, productivity will be higher with the "PS3" approach, but this is not necessarily true of all application areas.

    Rich
  • rjo_ Posts: 1,825
    edited 2007-11-04 00:57
    deSilva,

    You know that I am a big fan of yours and that I also like to needle you a little... did you really say that you stacked 4 Props together and couldn't get them to do anything?

    I don't believe it.

    First, you need to program it... store everything to the EEPROMs and then hit all of the reset buttons at the same time :)

    But seriously... you actually tried to do something that didn't work?

    What?



    Rich
  • rjo_ Posts: 1,825
    edited 2007-11-04 01:18
    deSilva et al,

    The Propeller is a new kind of controller; we need a new kind of benchmark. So, please put this through your main processor and see what comes out; I'm a little stumped.

    I have exactly one half of two good ideas... here they is (sic)... in two parts:

    part A... everybody leaves the counters out of the tabulations... so even when we talk about a number like 20 MIPS, we are really leaving out a major lump of processing capability... If you replace a counter mode with machine code... you end up needing a certain number of machine ops to do the same thing that the counters can do (just by setting them up right and telling them to start). Let's say that to produce exactly the same functionality in code that you can produce with a counter mode you need a minimum of 4 lines of assembly code... that to me means that each counter adds a certain number of MIPS equivalents... but nobody talks about it.

    The Propeller is being systematically and seriously short sold by such errors of omission.
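
    To hang a purely hypothetical number on that: take the 4-instructions figure above as nothing more than a working assumption, note that each cog has two counters which do their work on every system clock, and assume the replacement code would have to keep up at the full 80 MHz. Then the chip's 16 counters stand in for roughly

        8\ \text{cogs} \times 2\ \tfrac{\text{counters}}{\text{cog}} \times 4\ \tfrac{\text{instructions}}{\text{clock}} \times 80\ \text{MHz} \approx 5{,}120\ \text{MIPS-equivalent}

    against the 8 x 20 = 160 MIPS that usually gets quoted. The exact figure is debatable; the omission is the point.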

    part B... the Propeller is a controller... control involves making sensible changes in the environment or sensing measurable quantities in the environment...
    But nobody benchmarks the number of "sensible operations (SO)" an architecture is capable of... I am not an academic and this seems to me to be an academic issue.
    If we could get some support for the notion, then the answer to FLOPS would be SO?


    Thanks,

    Rich
  • rjo_ Posts: 1,825
    edited 2007-11-04 01:25
    I don't know how to post edit... please disregard the typos.
  • Fred Hawkins Posts: 997
    edited 2007-11-04 04:58
    Rich, on your own posts you will see a pencil. Click on it if what you wrote sucks.

    Today, I am coming to the conclusion that the 'breakout' Prop app will be a mass produced USB box that puts a bunch of pins (say 24) at the control of Windows computers. The pins will be used to drive a handful of electro-mechanical modules. Properly thought through, who wouldn't love to have their own plotter or CNC or 32 servos? (Or keep it simple, stupid -- that is, offer some tools and let people decide what to do.)
  • deSilva Posts: 2,967
    edited 2007-11-04 08:53
    Joao Geada said...
    ...Anyway, getting back on track, don't forget what the Propeller is and was designed to be: a microcontroller. Designed to not need many external components, simple instruction set, predictable instruction timing, lots of general purpose IO pins. Use it the way it is intended.
    True words! We have four "phyla" of computers at the moment. The youngest (well, 15 yrs old now) was never meant to be a building stone for supercomputers but to substitute for overly complex electronic circuits in a cheap, robust, all-inclusive package.
    Does someone say: "But microprocessors started with the same idea, now 30 years ago! Look what they have come to!"
    Exactly - and well described in the slides of the quoted lectures: They are monsters now, but perfectly suited as building stones for high-end parallel computing.

    The Propeller lets you do things you would normally need two or three chips for, to utilize their explicit parallel working. This is - as I already said - exactly how we use the COGs at the moment, as intelligent bit-banging I/O units. Very convenient, no doubt.

    Post Edited (deSilva) : 11/4/2007 11:17:29 AM GMT
  • deSilva Posts: 2,967
    edited 2007-11-04 09:21
    rjo_ said...
    The Propeller is a new kind of controller, we need a new kind of benchmark.
    Rich,
    (a) It only appears so to people coming from electronics. The same could have been said of the Intel 8080: "This is a new kind of controller..." But computers had already existed for 30 years at that time, in every university and in most companies. Obviously FLOPS would have been "unfair" to them. But that only showed that they were unable to substitute for those "number crunchers". This would still be the case, had they not "incorporated" floating point processors - a situation similar to a biological process a billion years ago, when "cells" incorporated photosynthetic "units", thus becoming plants...

    (b) We have "synthetic" and "natural" benchmarks. Every user (and manufacturer) is only interested in "natural" benchmarks. It is no use to me to know my Porsche can drive 200 mph when I live in an area with unpaved and winding country roads. So performance parameters are given in a multi-dimensional way: so and so much FLASH, so and so many counters, so and so many ADCs, so and so many address lines, so and so many DMA units.... Is this the right mixture for you? Then compare the price tags and you will find that we are the cheapest.

    I see no way to "linearize" :) this. The most common way is to use a value vector and compute the scalar product. But there is no agreed value vector...

    (c) Some processors have special abilities not best described by standard synthetic benchmarks; the best known are DSPs. (As I have not seen anything in this forum indicating that someone has worked with a TMS320, a "Blackfin" or a "SHARC", I shall not elaborate on that further..)
  • deSilva Posts: 2,967
    edited 2007-11-04 09:34
    rjo_ said...
    You know that I am a big fan of yours
    :)
    rjo_ said...
    First, you need to program it... store everything to the EEPROMs and then hit all of the reset buttons at the same time :)
    It was already more advanced: I used a daisy-chain reset. The first processor was released by a switch; after it had loaded its program, it released the next one, etc. One EEPROM only!

    I just had a look: I plundered 3 of the 4 chips for other purposes... I think I shall do something with it around Christmas again...
    rjo_ said...
    ... you actually tried to do something that didn't work? - What?
    I could send data blocks. But I should have to (re-) invent all higher protocol layers! This is what I have written about at least four times now: I need:
    - XSTARTCOG: Installing a machine program on a remote chip's COG
    - XRDLONG: Reading a word from a remote chip's HUB memory
    - XWRLONG
    - XLOCK

    This is straightforward but will take some weeks of serious work. I was distracted by other things, and thought that someone else would come up with something similar during the next months ;)
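
    A minimal sketch of what the request side of such a layer might look like (conceptual Python only; the packet format - opcode, hub address, length, payload - and the opcode numbers are invented here for illustration, since none of this exists yet):

        import struct

        # Hypothetical opcodes for the missing inter-chip primitives listed above.
        XSTARTCOG, XRDLONG, XWRLONG, XLOCK = 1, 2, 3, 4

        def make_request(opcode, hub_addr, payload=b""):
            """Pack a request as: opcode (1 byte), hub address (2 bytes), payload length (2 bytes), payload."""
            return struct.pack("<BHH", opcode, hub_addr, len(payload)) + payload

        # Ask the remote chip for the long stored at hub address 0x7F00 ...
        read_req = make_request(XRDLONG, 0x7F00)

        # ... and write the long value 42 to hub address 0x0100 on the remote chip.
        write_req = make_request(XWRLONG, 0x0100, struct.pack("<I", 42))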

    Post Edited (deSilva) : 11/4/2007 9:39:17 AM GMT
  • deSilva Posts: 2,967
    edited 2007-11-04 09:58
    rjo_ said...
    I found the dogmatic proclaymation (sic :)) about programmers not needing to understand processors really interesting... where do ideas like this come from?

    So, if the programmer doesn't really need to understand the processor... then all he really needs to understand are the abstractions, right? And which abstractions might those be? And if there is a problem with the abstraction... and that abstraction becomes the holy grail of ideas, then where are we?

    Rich, I think it was not meant to be "dogmatic", but stated as an accomplishment of the late '50s, that tying yourself to a specific machine or architecture will not further the general throughput of ideas and algorithms.

    There is no doubt you can do much good work with assembly language, but it takes masses of time. This was the first "SW crisis" and it was solved by compilers. (And by concepts such as "arrays", in contrast to the error-prone indexing of memory cells - sorry, I couldn't resist this side remark...)

    Dijkstra is always quoted: "Abstraction is our only mental aid to master complexity."

    So this is generally agreed on, and we know the performance loss is just a small factor (consider SPIN: the factor is 80...)

    A good question is how to choose the "right" or the "best" abstractions or metaphors. The '90s came up with OO, which is fine, though not generally accepted by hardcore hackers who still think Real Programmers need a GOTO.

    The basic abstraction for medium size parallel chunks is "process" ("thread" or "task", depending on their encapsulation level), that's the way we use our COGs. There are also some low level synchronisation means (semaphores, monitors,..).

    The main challenge is still fine-grained parallelism: How to communicate to the 900 processors on my chip that I want to compute the square of a 30x30 matrix? Telling them will take more time than doing it yourself....
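
    Counting operations for that 30x30 example (one processing element per output cell, communication counted per operand):

        30^3 = 27{,}000\ \text{multiply-adds total}, \qquad \frac{27{,}000}{900} = 30\ \text{per PE}, \qquad 2 \times 30 = 60\ \text{operands in} + 1\ \text{result out per PE}

    so each element's communication is already of the same order as its arithmetic, before any synchronisation at all.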

    Post Edited (deSilva) : 11/4/2007 11:06:56 AM GMT
  • hippy Posts: 1,981
    edited 2007-11-04 12:37
    rjo_ said...
    I found the dogmatic proclaymation (sic :)) about programmers not needing to understand processors really interesting... where do ideas like this come from?

    From the notion that it's not necessary to understand everything about everything to do something. A builder doesn't need to have much understanding of physics to knock a nail in with a hammer, nor know the mechanics of a nail gun to use one. They don't need to have a deep understanding of what types of bricks and blocks there are, just know the qualities of those so they can choose the right ones during construction.

    There are programmers who do need to understand the underlying processor, but the majority do not. I don't think it was such a dogmatic proclamation, just good advice and truth for most programmers.
    rjo_ said...
    So, if the programmer doesn't really need to understand the processor... then all he really needs to understand are the abstractions, right? And which abstractions might those be?

    Abstractions are all that most programmers need.

    When you write in a high level language you do not really care what the architecture of the hardware is. Spin is an abstraction; y := m*x+c - you do not need to know the underlying processor to do that, you don't need to know how a processor achieves multiplication or addition, just that it delivers the result you want. You don't really care if it's a 1-bit or 128-bit processor. The only thing you care about is that the result is as you want it, and sometimes how fast the hardware runs so you can determine how much you can do in a particular time frame.

    You don't need to know if a processor has eight separate cogs or that it's a one cog processor presenting itself as eight cogs in the same way. What's on offer is exactly the same no matter how it's achieved.

    Even at lower levels you can still rely on abstractions. Do you really care how the video driver works or what processor hardware there is to help there? No, all you really need to know is how to set it running and how and where to put data to display.

    Assembly opcodes are themselves abstractions of what the underlying processor hardware is. They indicate functionality but do not reveal exactly how the processor achieves the end result.

    In some cases it is necessary to work at lower levels of abstraction, particularly bit-banged device drivers, where timing becomes important, but that's a particular group of programmers not the general case.

    DeSilva's aside on 'arrays' above is a great example; whether arrays exist or not, whether what a programmer says is an array is or isn't, most programmers are generally quite happy to take an abstraction that they are using arrays and go with it.
  • deSilva Posts: 2,967
    edited 2007-11-04 14:00
    rjo_ said...
    And if there is a problem with the abstraction... and that abstraction becomes the holy grail of ideas, then where are we?
    I shall not forget to comment on this very disquieting question of yours.

    It applies to science, politics, religion as well...

    There are two kinds of abstraction: analytic (classifying and labeling observations) and synthetic (setting standards)

    Analytic abstractions are used to understand the world, synthetic abstractions are needed to sort out the mess you made yourself.
    The best understood analytic abstractions can be found in natural science, called "first principles". The method of science since the days of Galilei has consisted of mercilessly dropping obsolete abstractions and substituting them with more suitable ones. The criteria are the outcomes of experiments....

    The best known synthetic abstractions could be found in mathematics (at least in the days of Kant). Do not ask what a complex number "really" is... It is exactly what we have defined. The uniformity of such agreements among mathematicians is astonishing - or should I say: worrying?

    But we have better examples nowadays: just "standards": the metric system, for example. Do not ask why we use 0.254 mm pitch, M3, M4 screws, 110 Volts, or NTSC video... That's neither right nor wrong, but (in) your country :)

    It might make your life easier or not ("But I need another kind of screw!" "No chance!")

    In the golden days of computer science, everything was possible; the number of exotic programming languages was uncountable: SNOBOL, LISP, ICON, SIMULA, APL, PROLOG, SMALLTALK, ERLANG, HASKELL.....

    They tried out new ways to program, as others tried out new ways to live at that time ... Most programmers nowadays are restricted by more standardized concepts, as to be found in Java.

    But that still is neither good nor bad, right or wrong. I should like to compare it to learning your mother tongue... But don't forget that there are more options...