
Prop-2 Release Date and Price


Comments



  • I suppose, but it seems really wasteful of time and effort to write a compiler in assembler for two different architectures. It will also make it much more difficult to maintain the two in parallel. What I hoped would happen was to have someone work in tandem with Chip in a co-development environment, where a software guy (maybe Roy) writes the tools while Chip designs the hardware and specifies the Spin language.

    Agreed, and I think it will work this way, because the P2 opcodes will be defined long before the P2 silicon sign-off.
    That means Chip needs to focus on the P2 Verilog design sign-off critical path, and cannot be distracted "writing tools".

    P2 opcodes will likely be defined before the first (Alpha) P2 FPGA images, because the smart pins coding needs to come after the Core opcodes.



    I don't think this is true. I don't think we'll get anything before Chip has a complete P2 design and FPGA image, along with a PNut.exe that will compile code for it. If we were going to see a P2 instruction set in advance of the FPGA image, we would have it by now.
  • jmg Posts: 15,175
     If we were going to see a P2 instruction set in advance of the FPGA image, we would have it by now.

    ? The FPGA core is still in flux today, so we cannot expect P2 opcodes until after the Core is done, and the final FPGA image has much to add after that point is reached.
    Of course, the Core milestone will have some minimal ASM that was used to test that far, but a full compiler suite it is not.

  •  If we were going to see a P2 instruction set in advance of the FPGA image, we would have it by now.

    ? The FPGA core is still in flux today, so we cannot expect P2 opcodes until after the Core is done, and the final FPGA image has much to add after that point is reached.
    Of course, the Core milestone will have some minimal ASM that was used to test that far, but a full compiler suite it is not.



    My prediction is that the instruction set will be in flux until it is cast in silicon. Perhaps it is best to wait until then to start developing tools.
  • There won't be any harm at all.

    And he's going to do Verilog and PASM, so the ROM makes sense and we can build on the test image. SPIN may well wait, but those two work together, as they should.

    Which means we get PASM and most likely, C to start with. What's not to like?

    As far as SPIN goes, it is one with PASM, and I seriously don't want that tight, elegant environment to get messed up. I believe it would be, given the kind of input and suggestions I saw.

    Besides, it is Chip's language. He should define it, and then others can do what they will.

  • There won't be any harm at all.

    And he's going to do Verilog and PASM, so the ROM makes sense and we can build on the test image. SPIN may well wait, but those two work together, as they should.

    Which means we get PASM and most likely, C to start with. What's not to like?

    As far as SPIN goes, it is one with PASM, and I seriously don't want that tight, elegant environment to get messed up. I believe it would be, given the kind of input and suggestions I saw.

    Besides, it is Chip's language. He should define it, and then others can do what they will.

    Probably best to just use PNut.exe to play with the instruction set while the dust settles and expect Spin and C later when we have real silicon.
  • jmg Posts: 15,175


    My prediction is that the instruction set will be in flux until it is cast in silicon. Perhaps it is best to wait until then to start developing tools.

    ? But that level of caution/conservatism means Ken gets very little test coverage of the FPGA image, and he is never confident enough to sign off for silicon...

    Sure, I'd expect possible small late changes to opcodes as testing uncovers issues, but I would not expect wholesale deviations.
    Mostly, I would expect Verilog changes to make the opcodes work as expected in all use cases.


  • My prediction is that the instruction set will be in flux until it is cast in silicon. Perhaps it is best to wait until then to start developing tools.

    ? But that level of caution/conservatism means Ken gets very little test coverage of the FPGA image, and he is never confident enough to sign off for silicon...

    Sure, I'd expect possible small late changes to opcodes as testing uncovers issues, but I would not expect wholesale deviations.
    Mostly, I would expect Verilog changes to make the opcodes work as expected in all use cases.


    That's not what happened the last time an FPGA image was released. 
  • As for "tools" being in quotes, that's part of the problem right there, and precisely why I will push back on this until the thing is done.

    The Propeller, PASM and SPIN are one unit.  They are designed as one working unit, and that's why the P1 is what it is.  It is what it is because the usual, expected path was not taken.

    And P2 is going to have those same attributes.  We've demonstrated that "pro" or "industry" or "standard" type tools can be made, so no worries.  They will get made.

    But so should the P2 in the way it's designed.

    Re: ops and the FPGA.

    I think we will see something closer to a complete design, and not something so interactive this time around.  It won't be "implement the ops", but it also won't be the radical changes we saw happen on the "Hot" chip either.  Some testing, like on the Smart Pins, may well highlight problems or more optimal approaches.  The changes will involve both the instructions and the Verilog.


  • As for "tools" being in quotes, that's part of the problem right there, and precisely why I will push back on this until the thing is done.

    The Propeller, PASM and SPIN are one unit.  They are designed as one working unit, and that's why the P1 is what it is.  It is what it is because the usual, expected path was not taken.

    And P2 is going to have those same attributes.  We've demonstrated that "pro" or "industry" or "standard" type tools can be made, so no worries.  They will get made.

    But so should the P2 in the way it's designed.

    Re: ops and the FPGA.

    I think we will see something closer to a complete design, and not something so interactive this time around.  It won't be "implement the ops", but it also won't be the radical changes we saw happen on the "Hot" chip either.  Some testing, like on the Smart Pins, may well highlight problems or more optimal approaches.  The changes will involve both the instructions and the Verilog.

    I guess we'll see what happens, hopefully soon.
  • This time, we do have a team working with Chip.  Getting to sign off on silicon won't be as difficult IMHO.  The change scopes won't be so nuts, and those guys can help nail things down.

    I'm optimistic we can get to a commit.  Once that happens, like David says, other projects can start. 

    Chip is going to have to give that commit on the instructions, IMHO.

    And if not, then we get PASM.  Other things to follow. 
  • Yeah, hopefully soon indeed!


  • jmg Posts: 15,175


    That's not what happened the last time an FPGA image was released. 

    Hehe, well, yes, you are right there.
    I was assuming lessons had been learned from history...
    The older P2 has provided a useful framework for what can, and cannot, be achieved, so I am more optimistic the level of flux will be less... maybe that is misplaced optimism?
  • This time, we do have a team working with Chip.  Getting to sign off on silicon won't be as difficult IMHO.  The change scopes won't be so nuts, and those guys can help nail things down.

    I'm optimistic we can get to a commit.  Once that happens, like David says, other projects can start. 

    Chip is going to have to give that commit on the instructions, IMHO.

    And if not, then we get PASM.  Other things to follow. 


    There is a team working with Chip? Who is on it?
  • potatohead Posts: 10,261
    edited 2015-07-14 04:34
    Ken mentioned them.  They are helping with the design, and the long delay has been getting the custom blocks integrated with their workflow so that Chip can write Verilog and export something to them for analysis, verification, etc...  As I understand it, they aren't on the P2 CPU design path, but are on the layout, verify, analyze one.

    He's been either on Verilog, or helping to sort out the workflow.  I think the workflow is just about done, based on the last comments.  Maybe another go around or two.  Then it's game on for Verilog, and images for us soon after that.

    Most of the stuff we asked about was "in there", meaning a lot of the design is there now, IMHO.

    I can't quite remember, but I think I remember them being closely associated with, or part of, the foundry team.


  • jmg Posts: 15,175
     Some testing, like on the Smart Pins, may well highlight problems or more optimal approaches.  The changes will involve both the instructions and the Verilog.

    True. Another thing that drives small opcode improvements is the development of compilers and tool suites. If you defer all of that until the chip is in silicon, the overall chip design ends up sub-optimal.
  •  Some testing, like on the Smart Pins, may well highlight problems or more optimal approaches.  The changes will involve both the instructions and the Verilog.

    True. Another thing that drives small opcode improvements is the development of compilers and tool suites. If you defer all of that until the chip is in silicon, the overall chip design ends up sub-optimal.


    If matching the needs of compilers was a goal, then it should have happened long before we got to a nearly final design.
  • Which is precisely why I see PASM being designed right as the chip is, at a minimum, and SPIN closely after that.

    We gotta be careful on that last bit though.  A lot of one-off, speed-up type use cases got added.  Tons of instructions, and heat decoding them, among many other things.  Ideally, most of that isn't carried forward.

    But I'm with you on cautious optimism.  I think "hot" showed us a lot, and I know it showed Chip a lot.  He changed a lot and appears to have committed to it, and if I'm right about the state of things, did the core "must have" features in tandem with that team, and/or the data and improved design rules he got from the analysis done so far.


  • potatohead Posts: 10,261
    edited 2015-07-14 04:40
    >> If matching the needs of compilers was a goal, then it should have happened long before we got to a nearly final design.

    Yeah, that's RISC-V :)

    This one does not have those same goals.  People are / will be encouraged to write PASM.  Good.  :)

    Didn't we highlight a few things C / gcc really needs last go around?  I strongly doubt those will be ignored.




  • Which is precisely why I see PASM being designed right as the chip is, at a minimum, and SPIN closely after that.

    We gotta be careful on that last bit though.  A lot of one-off, speed-up type use cases got added.  Tons of instructions, and heat decoding them, among many other things.  Ideally, most of that isn't carried forward.

    But I'm with you on cautious optimism.  I think "hot" showed us a lot, and I know it showed Chip a lot.  He changed a lot and appears to have committed to it, and if I'm right about the state of things, did the core "must have" features in tandem with that team, and/or the data and improved design rules he got from the analysis done so far.

    What team?
  • I just can't quite recall.  It's in his last few posts.  Maybe he didn't name them.  They entered the picture a while back, sometime after Beau moved on to his new gig.

  • I just can't quite recall.  It's in his last few posts.  Maybe he didn't name them.  They entered the picture a while back, sometime after Beau moved on to his new gig.



    I see. I must have missed that.
  • Ariba Posts: 2,690
    edited 2015-07-14 05:47

    ...Sure, I'd expect possible small late changes to opcodes as testing uncovers issues, but I would not expect wholesale deviations.
    Mostly, I would expect Verilog changes to make the opcodes work as expected in all use cases.


    That's not what happened the last time an FPGA image was released. 

    No? As far as I remember, last time the first FPGA image was released in early December and the shuttle run was planned for the end of January, so we had a bit more than a month to find failures. Chip made 2 or 3 Verilog changes, and then Parallax missed the deadline for the shuttle run by a day or so. The first shuttle run then happened in March (the next possible date), but in the meantime no further Verilog changes were welcome. As we know, the chips did not work; otherwise we would have been working with a P2 for two or three years now (with 128kB RAM, no hardware tasks, and no hubexec). For sure we would have found a lot of tricks to keep the chip from getting so hot. Something like: don't use the video generator while you have more than 5 counters running...
    Andy
  • ozpropdev Posts: 2,793
    edited 2015-07-14 05:58
    @potatohead + David Betz
    I believe the "team" is Treehouse Designs
    http://www.treehousedes.com/
  • Yes, I think you are right!  I've been to that page before.

    Thanks!


  • @potatohead + David Betz
    I believe the "team" is Treehouse Designs
    http://www.treehousedes.com/


    Yes, that is correct. I knew about the outside design firm. I thought you were talking about a group of forumists who were working with Chip. Sorry for the confusion.
  • There won't be any harm at all.

    And he's going to do Verilog and PASM, so the ROM makes sense and we can build on the test image. SPIN may well wait, but those two work together, as they should.

    Which means we get PASM and most likely, C to start with. What's not to like?

    As far as SPIN goes, it is one with PASM, and I seriously don't want that tight, elegant environment to get messed up. I believe it would be, given the kind of input and suggestions I saw.

    Besides, it is Chip's language. He should define it, and then others can do what they will.

    I don't understand why Spin will wait. I assume P2 Spin will be almost identical to P1 Spin. What kind of changes are we talking about for Spin? The bytecodes may be different, and there may be a few new intrinsics that support new features, but I would guess that P2 Spin will not be much different from P1 Spin.
  • There won't be any harm at all.

    And he's going to do Verilog and PASM, so the ROM makes sense and we can build on the test image. SPIN may well wait, but those two work together, as they should.

    Which means we get PASM and most likely, C to start with. What's not to like?

    As far as SPIN goes, it is one with PASM, and I seriously don't want that tight, elegant environment to get messed up. I believe it would be, given the kind of input and suggestions I saw.

    Besides, it is Chip's language. He should define it, and then others can do what they will.

    I don't understand why Spin will wait. I assume P2 Spin will be almost identical to P1 Spin. What kind of changes are we talking about for Spin? The bytecodes may be different, and there may be a few new intrinsics that support new features, but I would guess that P2 Spin will not be much different from P1 Spin.

    With all of the extra hub RAM and hubexec, I wonder if Chip will consider native code generation instead of bytecodes for Spin2?
    Didn't he mention this as either an option, or as the default going forward?  I'm pretty sure it was linked to the in-line assembly discussion.  At first, it was going to be "the snippet", which gets sucked into the COG, then executed as part of a SPIN program.  Then, with HUBEX, it ended up "why not compile SPIN to PASM?", with in-line simply becoming more like true in-line assembly.

    Sure would be nice.  If we really do need tiny SPIN programs, that's an option down the road. 
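
    (For anyone weighing the bytecode-versus-native question above: the core tradeoff is a dispatch loop versus straight-line machine code. A minimal sketch in C of a stack-based bytecode interpreter follows; the four opcodes and their encoding are entirely made up for illustration, and are not P2 Spin bytecodes.)

        /* Minimal sketch of a bytecode interpreter: a made-up four-opcode
           stack machine, purely illustrative -- not P2 Spin bytecodes. */
        #include <stdio.h>

        enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };

        static void run(const int *code)
        {
            int stack[16];
            int sp = 0;   /* stack pointer */
            int pc = 0;   /* program counter */

            for (;;) {
                /* This dispatch step runs on every instruction; it is the
                   per-opcode overhead that native code generation removes. */
                switch (code[pc++]) {
                case OP_PUSH:  stack[sp++] = code[pc++];         break;
                case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
                case OP_PRINT: printf("%d\n", stack[--sp]);      break;
                case OP_HALT:  return;
                }
            }
        }

        int main(void)
        {
            /* Bytecode for "print 2 + 3": seven words, very dense.
               Native code would skip the dispatch and run faster, but
               typically costs more memory per statement -- which is why
               bytecodes made sense in the P1's 32kB hub, and why the
               extra hub RAM and hubexec change the question for P2. */
            int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
            run(prog);
            return 0;
        }
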
  • evanh Posts: 16,032
    edited 2017-10-03 15:10
    evanh wrote: »
    There is one significant difference with Linux, and that's who's using it and why it got investment. Linux could be called a last stand collaborative response to the evolving ecology (read, Wintel) of the eighties and nineties. With the GPL being a critical factor in the stability of the collaboration.

    PS: It's a beautiful example of the Prisoner's Dilemma with a rule that fosters hanging in there.
    evanh wrote: »
    Heater. wrote: »
    evanh,
    What do you mean by "investment" in Linux? As far as I know there is none worth talking about in a monetary sense.

    What there is is a lot of big companies contributing development effort into Linux. That is to say source code.
    Labour investment of course, source code doesn't need anything else. The best ones even put it on their books as part of R&D presumably.

    I have no idea about that "Prisoner's Dilemma" thing.

    The GPL is where "The Prisoner's Dilemma" kicks in. Without the GPL, Linux would have just languished like the myriad of BSDs before it, with various companies picking over its bones for isolated needs.

    Reviving an old comment I feel I didn't explain very well at the time: I've bumped into a nice descriptive blurb that concisely says what the Prisoner's Dilemma addresses - "Selfishness beats altruism within groups. Altruistic groups beat selfish groups. Everything else is commentary." - 2007, D. S. Wilson / E. O. Wilson.

    What the GPL provides is a rule, based in copyright law, that enables a collaborative level playing field. With collaboration being a form of altruism.


    PS: The Prisoner's Dilemma is an old Cold War era piece of game theory. I've only ever heard it described as a simplistic mind game, but the above quote gelled well with the gist of it.
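
    (For readers who haven't met it: the Prisoner's Dilemma is just a payoff table in which each player individually gains by defecting, yet mutual defection leaves both worse off than mutual cooperation. A small sketch in C with the textbook payoff numbers - the values are illustrative, not from anything in this thread. The GPL analogy above is that a binding rule takes the "defect" option off the table, so the cooperate/cooperate outcome becomes stable.)

        /* Classic Prisoner's Dilemma payoffs (textbook values, illustrative).
           payoff[me][other]: my score given my move and the other's move. */
        #include <stdio.h>

        enum { COOPERATE = 0, DEFECT = 1 };

        static const int payoff[2][2] = {
            /*                other cooperates, other defects */
            /* I cooperate */ { 3,              0 },
            /* I defect    */ { 5,              1 },
        };

        int main(void)
        {
            /* Whatever the other player does, defecting pays me more
               (5 > 3 and 1 > 0) -- so two "rational" players both defect
               and get 1 each, worse than the 3 each from cooperating. */
            printf("both cooperate:             %d each\n",
                   payoff[COOPERATE][COOPERATE]);
            printf("both defect:                %d each\n",
                   payoff[DEFECT][DEFECT]);
            printf("I defect, other cooperates: %d me, %d them\n",
                   payoff[DEFECT][COOPERATE], payoff[COOPERATE][DEFECT]);
            return 0;
        }
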
  • Heater. Posts: 21,230
    That all sounds overly complex to me. How about simple economics like so:

    X has a business that could be made more efficient, scalable, profitable with computers.

    Y has a different non-competing business that could be made more efficient, scalable, profitable with computers.

    Both X and Y want more profit so they both want computers. What to do?

    1) They each invest time and money into building the computer systems they need, like the Lyons company did back in the day: http://www.telegraph.co.uk/technology/news/8879727/How-a-chain-of-tea-shops-kickstarted-the-computer-age.html That's fine, but it's kind of expensive, a big investment, and that eats into any potential profit gains.

    2) They each go to a third party Z who builds computers and software. That's basically the model that was in place with IBM for decades, then MS and the like. It's fine, and hopefully cheaper, as Z can do the development once and share the cost among Z's customers. The problem is that Z gets too big and greedy and starts to have control of everything. You become dependent on Z forever.

    3) X and Y wake up and realise that if they worked together on the common parts of the system they need, it would be cheaper and they would maintain control themselves. Enter Free and Open Source Software, including Linux and the rest of the stack.

    Is that selfish or altruistic?

    It's selfish in that X and Y want to jack up their profits. It's altruistic in that they are helping each other. Does it make any difference what we call it?

    I guess D. S. Wilson said it right.

    I started to think recently that the "Free" in Free Software licences is not referring to you being free to use and distribute the software how you like, given you make the source available. It's actually about the software itself being free from being locked up: free to roam the world, propagate, and mutate, as it does. It's an evolutionary advantage for the software.
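
    (Heater's three options above reduce to simple arithmetic. A toy comparison in C - every number here is invented for illustration: a shared development cost, two firms, and a markup a vendor Z might charge.)

        /* Toy cost comparison for the three options above.
           All numbers are invented for illustration. */
        #include <stdio.h>

        int main(void)
        {
            double dev_cost = 1000000.0; /* cost to build the common system once */
            int    firms    = 2;         /* X and Y */
            double z_margin = 1.5;       /* hypothetical markup vendor Z charges */

            printf("option 1, each builds alone:  %.0f per firm\n", dev_cost);
            printf("option 2, buy from vendor Z:  %.0f per firm\n",
                   dev_cost * z_margin / firms);
            printf("option 3, co-develop & share: %.0f per firm\n",
                   dev_cost / firms);
            return 0;
        }
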
