Propeller II - Page 5 — Parallax Forums

Propeller II


Comments

  • Heater. Posts: 21,230
    edited 2012-08-08 08:33
    Phil,

    The AVR may be a good example. I'm not qualified to tell how well they actually succeeded in building the best 8 bit C engine.

    However I'm pretty sure that the result of a good C engine would also be good for Pascal, ADA and so on. In fact there is avr-ada. So I would not be so quick to blame C.

    Of course there is a somewhat philosophical point there. If your aim is to squeeze as much functionality in to a small memory space 8 bit mcu and have it run at reasonable speed BUT without users having to resort to assembler then it makes sense to optimize the architecture for that, at the risk of making assembler level programming harder.
  • David Betz Posts: 14,516
    edited 2012-08-08 08:34
    I believe it was an iterative process:

    1. Start with an architecture.
    2. Write a C compiler for it.
    3. Compile a bunch of C programs.
    4. Analyze the compiled machine code to see which instructions and register/memory accesses got used the most.
    5. Refine the architecture to make sure the most common things get done the quickest and with the smallest footprint.
    6. Go back to #2.

    -Phil
    If that's what they did then that seems like a reasonable approach provided that they had experienced people implementing the compilers at each stage.
  • David Betz Posts: 14,516
    edited 2012-08-08 08:36
    That depends on the architecture. Some micros still have INC and DEC instructions, which are shorter and require less time to execute than the equivalent ADD and SUB.

    -Phil
    That is very true and a good compiler will use those instructions (INC and DEC) to generate code for expressions like "foo = foo + 1". I know GCC will. I suspect most others will as well.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-08-08 08:42
    David,

    You beat me to it. I had already deleted that post when I realized what you just responded. :)

    -Phil
  • David Betz Posts: 14,516
    edited 2012-08-08 08:51
    David,

    You beat me to it. I had already deleted that post when I realized what you just responded. :)

    -Phil
    Sorry! :-)
  • jazzed Posts: 11,803
    edited 2012-08-08 08:52
    So, for a new customer interested in Propeller II, they visit this thread and find out Propeller II is about language wars.

    Please explain how this helps Parallax.
  • 4x5n Posts: 745
    edited 2012-08-08 08:56
    David Betz wrote: »
    That is certainly true with the auto-increment and decrement and assignment operators. Those are really unnecessary now that compiler optimization techniques have improved. However, those constructs are often easier to understand than the more verbose ones. I'd say that "++foo" says "increment foo" to me better than "foo = foo + 1". But, those shorthands are not needed anymore for speed. I'm sure GCC compiles the same code for both expressions and so would any other modern compiler.

    I thought the pre/post increment and decrement were added to C because the CPU on the PDP-11/7 had that as an assembly instruction.
  • David Betz Posts: 14,516
    edited 2012-08-08 08:57
    jazzed wrote: »
    So, for a new customer interested in Propeller II, they visit this thread and find out Propeller II is about language wars.

    Please explain how this helps Parallax.
    I suppose it doesn't. I'd be happy to have someone go back through this thread and delete all of the comments I posted about languages as long as all of the comments other people posted are deleted as well.
  • Heater. Posts: 21,230
    edited 2012-08-08 09:04
    Bah, no war here. Just a friendly debate. But yes, let's skip that and get back to Prop II things.
  • David Betz Posts: 14,516
    edited 2012-08-08 09:10
    Can someone point me to the latest official description of the Propeller II architecture including the instruction set and also the SDRAM interface? I think copies have been attached to various messages but the one on the main Parallax web site (http://www.parallax.com/Propeller2FeatureList/tabid/898/Default.aspx) doesn't contain a lot of the information that would be needed to start thinking about programming the P2. Where is the latest official spec?

    Thanks,
    David
  • 4x5n Posts: 745
    edited 2012-08-08 09:15
    jazzed wrote: »
    So, for a new customer interested in Propeller II, they visit this thread and find out Propeller II is about language wars.

    Please explain how this helps Parallax.

    The status of the Prop II has been given by Chip; not much more to talk about "on topic" :-)

    Personally I think we should let the language "war" drop or move it to another thread with a proper topic.
  • potatohead Posts: 10,261
    edited 2012-08-08 09:25
    http://www.digibarn.com/collections/posters/tongues/ComputerLanguagesChart-med.png

    I am very curious about the shackles argument, and really enjoy this discussion.

    The PASM additions are notable. REP makes for some HLL type functionality in PASM. One of the attractions C has is being able to deal with program control and expressions in a human readable way, where PASM is difficult. IMHO, PASM2 just got a little easier in this respect. Looking forward to that just as much as I am SPIN2, and C having the room to really take off!

    Re: Types

    Totally. Make them optional, and if it were me, on compile just inform the user about the types declared. The system will just use the default type and state that, followed by the types the user did declare. Newbies can ignore that stuff and watch the blinking light, graphics, etc... Once they grow a little, or bump into some need, that list will make a lot of sense, and they will have seen it a bunch of times too.

    Re: New customer.

    They would come to see P1 is here, and that P2 isn't, but will be "soon." They also would see the creator of the chip interested in what some experienced developers have to say, as well as get some idea of what drives Chip to make chips the way he does, as well as what those experienced developers think about languages and computing too. None of that is bad.
  • Sapieha Posts: 2,964
    edited 2012-08-08 10:21
    Hi .

    From my end it is NOT any WAR --- only some real conclusions about -- Languages
  • Clock Loop Posts: 2,069
    edited 2012-08-08 13:58
    cgracey wrote: »
    You can abuse the PLL via the video generator just the same as you could in Prop I to get the truly random WAITVID timing phenomenon.

    My question is: Can you abuse the loader (boot process) so much that one propII can piggy back another?
    (this means all replies during loader sequence from piggy back propII will go ignored)

    So say you go into the Propeller Tool IDE and do an F10 (LOAD RAM) with a single prop connected to the prop plug. But you take a secondary prop with its own crystal identical to the first, and you only connect its RX to the prop plug's TX, so it can hear the prop plug, but cannot talk to it.

    This will allow you to load 2 props identically using a prop plug and the propeller tool IDE.

    I don't know if this ability of the prop to accept a program, even when you ignore its replies, was intentional, but it allows parallel programming.
    It's a much more reliable process when a master prop (instead of a prop plug) programs all slave props.


    I ask this because with encryption comes a unique chip response during boot or programming, and any traffic unique to each chip means failure of the programming sequence in a parallel programming circuit, like the one I previously posted here.

    Unless a dummy programming mode switch could be used to get a future propII into an "accept and run complete program download as is, no questions asked" mode.

    This ability is useful if you are taking 50 prop chips and connecting them to the same circuit, and want a way to get them all programmed in less than a second, actually in the same time frame as a single prop chip's timing duration. If you have ever serial loaded more than 10 prop chips, you can understand that parallel loading is advantageous.

    So ... My question is: Can you abuse the loader (boot process) so much that one propII can piggy back another?
    (this means all replies during loader sequence from piggy back propII will go ignored)
    And not have the piggy back propII throw a fit because it didn't get unique responses during programming sequence?

    When chips come out with internals that allow unique encryption replies during programming sequence, (IDs in fuses), this kills the ability to parallel load the chips, unless the chips know they might not get a unique reply, and are fine with it, as long as it follows protocol....
  • cgracey Posts: 14,155
    edited 2012-08-08 14:09
    Sapieha wrote: »
    Hi .

    From my end it is NOT any WAR --- only some real conclusions about -- Languages

    That's how I feel about it, too.

    About the "shackles" comment: C is *the* embedded language of our era, which has reflexively defined in people's minds what computing is all about. How apt have processor designers been for the last 20 years to create systems which are not given initial and heavy consideration to how C will run on them, with the commensurate awareness of practical compiler limitations? Is C an adequate expression medium for all that is possible? Is any procedural language, for that matter? Spin is just as one-dimensional as C is - it just runs on 1.5-dimensional hardware (not nearly 2-D, but multiple 1-D). I think a big breakthrough is needed that extends deep into the fabric of the hardware, getting us beyond the 1-D physical and mental trap that we are all in. Then, expression can evolve, too. For now, we are in a symbiotic language/hardware lock-up.
  • David Betz Posts: 14,516
    edited 2012-08-08 14:17
    cgracey wrote: »
    That's how I feel about it, too.

    About the "shackles" comment: C is *the* embedded language of our era, which has reflexively defined in people's minds what computing is all about. How apt have processor designers been for the last 20 years to create systems which are not given initial and heavy consideration to how C will run on them, with the commensurate awareness of practical compiler limitations? Is C an adequate expression medium for all that is possible? Is any procedural language, for that matter? Spin is just as one-dimensional as C is - it just runs on 1.5-dimensional hardware (not nearly 2-D, but multiple 1-D). I think a big breakthrough is needed that extends deep into the fabric of the hardware, getting us beyond the 1-D physical and mental trap that we are all in. Then, expression can evolve, too. For now, we are in a symbiotic language/hardware lock-up.
    Okay, that makes sense. You're not really singling out C but talking about any procedural language. C is just the most commonly used one at the moment. I can agree with that!
  • cgracey Posts: 14,155
    edited 2012-08-08 14:18
    Clock Loop wrote: »
    My question is: Can you abuse the loader (boot process) so much that one propII can piggy back another.
    (this means all replies during loader sequence from piggy back propII will go ignored)

    Yes!!! The download protocol for Prop II is simplistic in that way. You could download to all in parallel, then have each differentiate its function by either an in/out-in/out-... enumeration chain or unique IDs from each chip's fuse bits (with fuse key bits being identical among chips, for board-wide parallel authentication during loading).
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-08-08 15:00
    Regarding other possible programming paradigms, there's always dataflow, although I suspect it requires more (and smaller) functional elements than the Prop embodies to be truly effective. In fact, I don't know of any actual implementations to point to as examples, except maybe spreadsheets.

    Someone mentioned Erlang in this or another thread (I'm starting to lose track), so functional programming could be another productive avenue to explore.

    -Phil
  • Circuitsoft Posts: 1,166
    edited 2012-08-08 15:12
    Seems to me that's what drawing RTL for a CPLD/FPGA does...
  • Kye Posts: 2,200
    edited 2012-08-08 16:32
    I think I understand now...

    Okay, here's a talk from someone who wants to go a different direction: http://www.c-eda.org/index.php?menuphp=menu_dss&mainpage=distinguished

    Steve Teig, President and CTO of Tabula (www.tabula.com). He gave a talk on "Beyond von Neumann Computing" at the Design Automation Conference (DAC), June 2010.

    Here's a link to the video: http://www.c-eda.org/IEEE-CEDA-DAC-061510/IEEE-CEDA-DAC-061510.html.

    Thanks,
  • Cluso99 Posts: 18,069
    edited 2012-08-08 16:37
    Yes, it's not a language war, just discussions from each person's perspective.

    From my perspective, C is a complicated language, whereas something like BASIC is much simpler and the code is more readable. With modern intelligent compilers, the simpler and more readable format of BASIC, with some of the abilities of C to do the grass-roots things that are complicated in BASIC, would be a huge improvement. My dislike comes from all the hieroglyphics that programmers take great delight in using. It just looks like a jumbled mess and in many cases is almost undecipherable. I find the choices of void, switch, and other idiosyncrasies of C to be more than offputting in a language. BASIC is far easier to read in these instances. I find begin/end far better than { and }. From this, you can see that I dislike some of the Spin hieroglyphics too, but far less than C's. I like the enforced indentation of Spin, but I agree that some find this disturbing, so I would opt for being able to toggle it on/off at will.

    So, my issues are more to do with the language representation than the actual implementation aspects. I would have thought that by now, over 40 years on, programming language syntax would be more common. If you recall, BASIC took a lot from other languages, and I would say was quite influenced by Pascal, to evolve to where it is today. No more horrid line numbers, GOTOs almost gone, structures added such as Pub/Public and Pri/Private, global/local variables, return parameters, etc, etc.

    As I see it, I agree with the statement often applied to C, that C was developed by programmers who feared for their jobs, so they made it complex and unreadable.

    And, I guess I should say, my perspective is an oxymoronic one, because I prefer assembler over high-level languages, as I like to sit at the metal. However, I don't care for complex instruction sets. This is why I find the Prop so refreshing: its "regular" instruction set. I would have preferred the "other" endianness (I always get confused as to which is called which) and the instructions to be source followed by destination, but that is just me, I guess.

    I do like the mix of Spin and PASM. As with others, I would have loved double the cog space. We could have vastly improved the Spin language (both syntax and extensions), plus speed, if only we had access to the PropTool source. An open-sourced one is in the pipe :)

    I achieved ~25% improvement in my spin interpreter, plus 30% overclocking (104MHz), resulting in a spin improvement of 60%.
    I should of course make the point that my interpreter was only possible because Chip published the (his) Interpreter - thanks Chip!

    By changing the decoding to a vector table in hub, this was much faster and freed up valuable cog space. By utilising this additional cog space, I was able to unravel Chip's code to make it faster. Unravelling the maths functions yielded a major improvement in execution speed. And thanks to others, I was able to improve the multiply, divide and sqrt code. There is still cog space available to unravel even more code, and improve the vector tables. During my development phase, I got fast overlays and my zero-footprint debugger working. Overlays could be added back for some existing and additional functions, such as floating point. So I see a mix of overlays and LMM for providing additional functions. What I lacked at the time was sufficient usable code to do instruction profiling. Most of my work was achieved without actually understanding much of the interpreter byte code.

    Dave modified the Interpreter (Chip's?) to add some LMM language extensions.

    All these things are still on the table and itching to be done, all on P1.

    P2 will give us, apart from big speed improvements, improved instructions to save valuable cog space, effective stack space in hub utilising the video CLUTs, quad-long hub access with auto-increment, and 2x hub cycle access. IMHO, overlays may prove superior to LMM and cache, if designed right.

    Certainly interesting times ahead :)
  • jmg Posts: 15,173
    edited 2012-08-08 16:55
    Seems to me that's what drawing RTL for a CPLD/FPGA does...

    Yes, there is potential for Prop languages to draw from CPLD/FPGA, as those have natural 'true parallel' operation.

    The biggest challenge in the Prop case is the COG ceiling, which needs a quite different approach to the 'Big Iron' parallel processing.

    With COGs you need some means to tell any compiler what slice of code will run via LMM (or whatever), and what will compile to PASM and run truly independently and fast within a COG.

    I think Prop-GCC is doing quite well at targeting that problem.

    What I feel the Prop does need is a better high-level assembler, and if the C code generator can be made good enough, and user-controllable enough, perhaps that will do.

    Users do need a 'softer' jump between flexible and fast.

    Another already-established language family that has usage-implicit parallel operation is the PLC languages in IEC 61131.
    In a real PLC, they have some defined scan rate, and that is usually much faster than the system response times, so logically all decision trees occur in parallel.

    A Prop could pack a tiny kernel into a COG, and fit the rest of a moderately sized control program in the same COG.
    It would be MUCH faster than any PLC, and you have up to 8 of these in one Prop.
    ( A good compiler would fill one COG, then move onto the next, to insulate the user from the LONG ceiling )

    A quick overview of IL is here:
    http://www.61131.com/il.htm
    and a more expanded version is here
    http://www.automation-course.com/il-commands-in-alphabetic-order/
  • brucee Posts: 239
    edited 2012-08-08 19:42
    Back to language wars, this is an interesting site from a company that surveys language popularity.

    http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
  • mindrobots Posts: 6,506
    edited 2012-08-08 20:03
    I can hear the CTOs now, "We've chosen ________ as our strategic development platform because it's the most popular...all the cool kids are using it!"
  • 4x5n Posts: 745
    edited 2012-08-08 21:07
    brucee wrote: »
    Back to language wars, this is an interesting site from a company who surveys language popularity.

    http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html

    Looks like C is walking away with it! :innocent:

    Seriously, although I do like C, I don't think it's the best language for the Propeller. With multiple cogs I think that the Prop lends itself better to an object-oriented language. Back in a former life, a large part of my job involved writing machine control software for industrial process control applications. The processor of choice was first the 6809 on the motherboard of the RS Color Computer and, when that was discontinued, the 68HC11. For those processors I quickly gave up on using C and wrote the code in assembly. The C compilers I had to work with just didn't generate good code for the 6809 and did even worse with the 68HC11.
  • cgracey Posts: 14,155
    edited 2012-08-08 22:58
    Kye wrote: »
    I think I understand now...

    Okay, here's a talk from someone who wants to go a different direction: http://www.c-eda.org/index.php?menuphp=menu_dss&mainpage=distinguished

    Steve Teig, President and CTO of Tabula (www.tabula.com). He gave a talk on "Beyond von Neumann Computing" at the Design Automation Conference (DAC), June 2010.

    Here's a link to the video: http://www.c-eda.org/IEEE-CEDA-DAC-061510/IEEE-CEDA-DAC-061510.html.

    Thanks,

    Here is the abstract from his talk:
    Mr. Steve Teig

    President and CEO of Tabula

    Abstract

    The "von Neumann architecture" for computers, invented mostly by Turing but popularized by the more famous von Neumann, has completely dominated computing for more than 65 years. It is a masterpiece of simplicity: readily implemented in hardware, easily understood by software developers, and amenable to compilation from a wide variety of programming languages. Unfortunately, it achieves its simplicity from the fundamental, non-physical assumption that reading from a memory location takes negligible, constant time independent of the size of the memory. Decades of innovation in computer architecture and compiler design for uniprocessors has masked some of the von Neumann computer's intrinsic latency. The power requirements for this disguise have become prohibitive, though, which has ended the long, exponential rise in uniprocessor clock frequency. Multi-core processors, the semiconductor industry's response, have the virtue that they can clearly be built, but no one knows how to program them! Further, they make the same negligible-latency assumptions as uniprocessors, but disguising that latency is now quadratically more difficult.

    This talk will show that highly useful yet non-physical oversimplifications such as the von Neumann architecture have numerous historical precedents from which we can learn. These examples suggest that a more physically aware, non-von Neumann machine could offer higher-performance and far more power-efficient computation. Next, we offer some thoughts on what such a machine might look like - hint: it is not an array of microprocessors! - and how one might program it. It is only by simultaneously approaching architecture, hardware, and software, seeing them as aspects of a cohesive whole as von Neumann and Turing both did, that we maximize our chances of going beyond von Neumann computing.

    This man is onto what I've been trying to articulate here, myself. His company's products seem to be time-multiplexed FPGAs, where the logic fabric is reconfigured and clocked 8 times per single 'user' clock. I've contemplated things similar, where the memory is completely smashed out and percolates up through the computational logic. It's like a sphere I can feel a partial side of, but can't grasp yet. It seems that these people have spent a good amount of time thinking about this and have come up with something physical that aims to compete against Xilinx' and Altera's big FPGAs. Their chips have over 1,000 pins, though, and I imagine would cost over $1,000, as Altera's parts can cost several $1,000's, themselves. So, this is not practical for most of us to employ in anything. In looking at their marketing, it seems that these devices are aimed at internet infrastructure applications, in which big FPGAs are already in use. So, it seems they are trying to do something new, but with an exact and esoteric market in mind.

    Kye, does the video link work for you? It errors out for me. Thanks for posting this.
  • Sapieha Posts: 2,964
    edited 2012-08-08 23:46
    Hi Chip.

    This link to the videos works for me on Firefox.

    http://www.c-eda.org/IEEE-CEDA-DAC-061510/IEEE-CEDA-DAC-061510.html

    cgracey wrote: »
    Kye, does the video link work for you? It errors out for me.
  • potatohead Posts: 10,261
    edited 2012-08-08 23:57
    It worked for me as well. Watching the talk right now. I second the thanks Kye. Very interesting!

    @Chip: Are you working through some web proxy or filtering software?
  • jazzed Posts: 11,803
    edited 2012-08-09 00:07
    I find great irony in needing to use Java to see that presentation :)
  • cgracey Posts: 14,155
    edited 2012-08-09 00:18
    potatohead wrote: »
    It worked for me as well. Watching the talk right now. I second the thanks Kye. Very interesting!

    @Chip: Are you working through some web proxy or filtering software?

    Yes, I shut it down and am now able to watch it. Thanks!