New Spin - Page 8 — Parallax Forums

New Spin


Comments

  • Has the idea of writing the official Spin 2 compiler in Spin 2 for x86 been discussed properly yet? If a least-common-denominator version of Spin 2 that can run on things other than the Propeller is going to be written, then it should be self hosting. What would be the best way to bootstrap such a thing?
    I tried to suggest that a while back but the idea didn't get any traction.

  • Heater. Posts: 21,230
    What do you mean self hosting?

    Normally that implies that a compiler for a language, Spin in this case, is written in the language to be compiled, Spin again.

    I don't recall anyone ever discussing writing a Spin compiler in Spin, no matter what processor it targets.

  • Heater. wrote: »
    What do you mean self hosting?

    Normally that implies that a compiler for a language, Spin in this case, is written in the language to be compiled, Spin again.

    I don't recall anyone ever discussing writing a Spin compiler in Spin, no matter what processor it targets.
    See what I mean? It got no traction. :-)
    Anyway, why couldn't a compiler be written in Spin?

  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2017-02-22 22:42
    David Betz wrote:
    Anyway, why couldn't a compiler be written in Spin?
    It's been done: http://www.sphinxcompiler.com/ despite Spin's rather weak string primitives.

    -Phil
  • Heater. Posts: 21,230
    edited 2017-02-22 22:44
    I guess anything can be written in Spin. Even a Spin compiler. Turing completeness and all that.

    Has anyone seriously suggested that?

    Sounds about as much fun as writing a hello world program in Smile https://esolangs.org/wiki/Smile


  • Has the idea of writing the official Spin 2 compiler in Spin 2 for x86 been discussed properly yet? If a least-common-denominator version of Spin 2 that can run on things other than the Propeller is going to be written, then it should be self hosting. What would be the best way to bootstrap such a thing?

    I think the idea is that the Spin 2 compiler should be able to run on multiple platforms, but produce code only for P2 (and perhaps P1). That said, spin2cpp does have some support for producing platform independent C code, so in principle we could extend spin2cpp to support Spin 2 and then write a Spin 2 compiler in Spin 2 and compile it for the PC.

    The benefit is that it's a good way to expose any weaknesses in the Spin 2 data structures and capabilities (a compiler is a good exercise of that sort of thing). The drawback is that we kind of end up with two implementations, the initial bootstrap one (spin2cpp) and then the final one (Spin 2). Also, we'd have to support multiple code generators if we wanted the "native" Spin 2 to support x86.

    Eric
  • cgracey Posts: 14,206
    edited 2017-02-22 23:21
    David Betz wrote: »
    Heater. wrote: »
    What do you mean self hosting?

    Normally that implies that a compiler for a language, Spin in this case, is written in the language to be compiled, Spin again.

    I don't recall anyone ever discussing writing a Spin compiler in Spin, no matter what processor it targets.
    See what I mean? It got no traction. :-)
    Anyway, why couldn't a compiler be written in Spin?

    Funny this should come up. I realized last night that Spin2 call/data stacks should remain in the hub, as they do with Spin1. The reason is that there is no practical limit, then, on the recursion that would occur in an RPN expression parser, which is part of a compiler. The current stack frame could be in cog registers, but once a call occurs, it is sent to hub via SETQ+WRLONG and the new frame is now in the cog registers. When a return occurs, the lower stack frame is copied into the cog registers via 'SETQ+RDLONG'. This way, we get big stacks and fast local operations.
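    Chip's frame-spill scheme can be modeled roughly in software. The sketch below is an illustrative Python model, not real hardware behavior: the class name, method names, and 16-long frame size are all invented for the example. The active frame lives in fast cog-register storage, each call block-copies it to a hub stack (SETQ+WRLONG on real silicon), and each return block-copies the caller's frame back (SETQ+RDLONG).

```python
# Toy model of the frame-spill scheme: the active stack frame lives in
# fast "cog register" storage; on a call it is block-copied to a hub
# stack, and on return the caller's frame is block-copied back.
# All names and the frame size are illustrative assumptions.

FRAME_LONGS = 16          # assumed frame size in longs

class CogModel:
    def __init__(self):
        self.cog_frame = [0] * FRAME_LONGS   # fast local frame
        self.hub_stack = []                  # unbounded hub memory

    def call(self):
        # spill the current frame to hub, start a fresh frame in cog regs
        self.hub_stack.append(self.cog_frame)
        self.cog_frame = [0] * FRAME_LONGS

    def ret(self):
        # restore the caller's frame from hub
        self.cog_frame = self.hub_stack.pop()
```

    Recursion depth in this scheme is limited only by hub memory, while all operations on the current frame stay register-fast.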
  • jmg Posts: 15,175
    Has the idea of writing the official Spin 2 compiler in Spin 2 for x86 been discussed properly yet? If a least-common-denominator version of Spin 2 that can run on things other than the Propeller is going to be written, then it should be self hosting. What would be the best way to bootstrap such a thing?
    The self-hosting idea has merit, but as you have seen above, the mention of 'for x86' generates a reflex reaction.
    Another means to self-host is to have Spin2 emit byte codes, with WebAssembly or asm.js as possible examples.
    That would take longer, but it does bring portability to the table, and it would mean Spin could generate code for anything that supported those byte codes.

  • Heater. Posts: 21,230
    Certainly a recursive descent parser can use an endless amount of stack. When the code to be compiled makes a call, which makes a call, which makes a call....
    The current stack frame could be in cog registers, but once a call occurs, it is sent to hub via SETQ+WRLONG and the new frame is now in the cog registers.
    Doesn't that break things when the top level function passes a pointer/reference to lower level functions, to lower level functions....

    How is that pointer valid at the bottom of the pile?


  • cgracey Posts: 14,206
    edited 2017-02-22 23:35
    What about floats?

    In my experience, single-precision (32 bits) is a little paltry, having only 24 ('1'+23) bits of mantissa. That's about 64ppb accuracy.

    Double-precision, on the other hand, is plenty rich, but 64 bits.

    Would it be too weird to have whole+fraction fixed-point types of 16+16 bits and 32+32 bits? They could be added and subtracted directly, without unpacking. Multiplying and dividing would be direct, too, but require result shifting, which is no big deal. Maybe even having settable-point types, like 8.24 bits, would be good. Their 'whole.fraction' issue would only come up when shifting multiplication and division results.
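    The 16.16 fixed-point arithmetic described above can be sketched in a few lines. This is an illustrative Python model, not Spin2 syntax: add/subtract operate on the raw values directly, while multiply/divide need a result shift to restore the binary point.

```python
# Sketch of 16.16 fixed-point arithmetic: add/subtract work directly on
# the raw 32-bit values; multiply and divide require a result shift.
# Function names are invented for illustration.

FRAC_BITS = 16

def to_fix(x: float) -> int:
    return int(round(x * (1 << FRAC_BITS)))

def to_float(f: int) -> float:
    return f / (1 << FRAC_BITS)

def fix_add(a: int, b: int) -> int:
    return a + b                      # no unpacking needed

def fix_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS       # shift result to restore the point

def fix_div(a: int, b: int) -> int:
    return (a << FRAC_BITS) // b      # pre-shift dividend
```

    An 8.24 or 32.32 variant is the same code with a different FRAC_BITS, which is why the whole/fraction split only matters at the multiply/divide result shift.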
  • cgracey Posts: 14,206
    Heater. wrote: »
    Certainly a recursive descent parser can use an endless amount of stack. When the code to be compiled makes a call, which makes a call, which makes a call....
    The current stack frame could be in cog registers, but once a call occurs, it is sent to hub via SETQ+WRLONG and the new frame is now in the cog registers.
    Doesn't that break things when the top level function passes a pointer/reference to lower level functions, to lower level functions....

    How is that pointer valid at the bottom of the pile?


    That's too complex for me to think about, but I don't think it's an issue.
  • jmg Posts: 15,175
    ersmith wrote: »
    I think the idea is that the Spin 2 compiler should be able to run on multiple platforms, but produce code only for P2 (and perhaps P1). That said, spin2cpp does have some support for producing platform independent C code, so in principle we could extend spin2cpp to support Spin 2 and then write a Spin 2 compiler in Spin 2 and compile it for the PC.

    Sounds like a good idea, as much of that already exists.
    ersmith wrote: »
    The benefit is that it's a good way to expose any weaknesses in the Spin 2 data structures and capabilities (a compiler is a good exercise of that sort of thing).

    Another benefit is that it exercises and tests spin2cpp :)
    ersmith wrote: »
    The drawback is that we kind of end up with two implementations, the initial bootstrap one (spin2cpp) and then the final one (Spin 2).
    ersmith wrote: »
    Also, we'd have to support multiple code generators if we wanted the "native" Spin 2 to support x86.

    Maybe Intel will finally get it right on one of their small 'Embedded Controller revisit' experiments, and then a Spin 2 to native x86 could have appeal.
    In the meantime, a byte-code base, as the complement to native P2 binary, seems to make the most sense?

    If you want a compact and clean language that can self build, and maybe even self-host, and make compact systems, then perhaps it is time to look again at Project Oberon.
    The code for that is done, and it looks to need about P2-level resource.
    http://www.projectoberon.com/
    http://people.inf.ethz.ch/wirth/ProjectOberon/index.html
    https://github.com/dcwbrown/O7

    http://pascal.hansotten.com/category/project-oberon/
    - above link has Project Oberon emulators

  • Electrodude Posts: 1,660
    edited 2017-02-22 23:54
    jmg wrote: »
    Has the idea of writing the official Spin 2 compiler in Spin 2 for x86 been discussed properly yet? If a least-common-denominator version of Spin 2 that can run on things other than the Propeller is going to be written, then it should be self hosting. What would be the best way to bootstrap such a thing?
    The self-hosting idea has merit, but as you have seen above, the mention of 'for x86' generates a reflex reaction.
    Another means to self-host is to have Spin2 emit byte codes, with WebAssembly or asm.js as possible examples.
    That would take longer, but it does bring portability to the table, and it would mean Spin could generate code for anything that supported those byte codes.

    I'm under the impression that the problem is not "for x86", but rather "exclusively for x86". I think it should initially be for x86, because that's what most people use. However, being Spin, it would of course also work on the Propeller. It could also be made to work on, say, RISC-V (if it ever catches on) or ARM.

    But, yes, it would probably be best to have the compiler target mainly PNUT2 bytecodes, and distribute a PNUT2 interpreter in PASM2 and C. For popular targets such as x86 and the Propeller, a JIT compiler might be good. I don't know anything about WebAssembly or asm.js, but I assume it doesn't let you use real pointers (for security), so it probably wouldn't be practical to use for this.
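    The "distribute a bytecode interpreter" idea can be illustrated with a toy stack-based VM. The opcodes below are invented for the sketch and are not actual PNUT2 bytecodes; a real interpreter in PASM2 or C would follow the same dispatch shape.

```python
# Minimal stack-based bytecode VM sketch. Opcodes are invented for
# illustration only; they are not PNUT2 (or any real) bytecodes.

PUSH, ADD, MUL, HALT = range(4)

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1
        if op == PUSH:                       # push the next literal
            stack.append(code[pc]); pc += 1
        elif op == ADD:                      # pop two, push sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:                      # pop two, push product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:                     # return top of stack
            return stack.pop()

# (2 + 3) * 4 compiled to the toy bytecode
prog = [PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT]
```

    Anything that can host this loop can run the bytecode; a JIT for popular hosts would translate the same opcodes to native code instead of dispatching them.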
  • cgracey wrote: »
    What about floats?

    In my experience, single-precision (32 bits) is a little paltry, having only 24 ('1'+23) bits of mantissa. That's about 64ppb accuracy.

    Double-precision, on the other hand, is plenty rich, but 64 bits.

    Would it be too weird to have whole+fraction fixed-point types of 16+16 bits and 32+32 bits? They could be added and subtracted directly, without unpacking. Multiplying and dividing would be direct, too, but require result shifting, which is no big deal. Maybe even having settable-point types, like 8.24 bits, would be good. Their 'whole.fraction' issue would only come up when shifting multiplication and division results.

    Yes, if any types are added at all, please, please, add fixed point numbers!
  • cgracey Posts: 14,206
    edited 2017-02-23 00:02
    ersmith wrote: »
    cgracey wrote: »
    I wish I had a vision of how we could all collaborate to define the language. It really doesn't need to be that complicated, but parsing all the ideas seems complicated, at the moment. Any of us could go make our own whole language as quickly as we could work together, maybe even quicker. Do you think it's likely, though, that we could openly collaborate on the language definition and get something that's more robust and could be built easily?
    That sounds like a worthy goal, and I suspect that at least the broad outlines of the language could be hashed out that way.
    These are a few things that are on my mind:

    - maybe floating point support built in, somehow
    - structures, maybe
    - a base Spin language that is not overly tied to the hardware. See the instruction spreadsheet with the Spin procedures for PASM instructions - they provide a bridge to the actual instructions without Spin needing to absorb and repackage everything. This lightens up Spin and encourages PASM programming.
    - native code output, at least as an option
    - method pointers
    - methods can return any number of values
    - dead code removal (at least unused methods)
    I like most of these. I'd make the following suggestions:

    - just a few types: int, float, ptr, object, generic (matches any other types). Method variables can be sized (byte, word, long) but locals are always long
    - types are inferred by the compiler whenever possible; users can explicitly define them if they want, but they shouldn't have to in most cases
    - no need for structures if objects are first class citizens (a structure is just an object that doesn't happen to have methods)
    - I think I'd prefer to pass object pointers rather than method pointers
    - multiple return values would make a lot of things easier, but what do we allow to be done with them? plain assignment is easy, but should we allow:
      [x,y] += func2(a, b)
    
    when func2 returns two values?

    (I used square brackets for multiple values rather than round because I think it would make parsing easier.)
    Really, Spin2 just needs to be about scope, math/logic, and flow control. It doesn't need to deal with the streamer or anything too specific. It just needs to be a framework for writing code and incorporating objects.

    I think that makes a lot of sense.

    How important is backwards compatibility? My inclination would be to keep the syntax of Spin and make it pretty much a subset of Spin2.

    Eric

    About this:
      [x,y] += func2(a, b)
    
    Maybe we just shouldn't go there.

    In cases where two values are produced, they could be handled like this:
      ROTATE(x, y, angle : newxvar, newyvar)
    

    I don't think that strict backwards compatibility is that important. Seems like it would just set the stage for false hopes and frustration.

    I kind of like fixed-point math that I mentioned in a post above. No special add/subtract awareness, just result shifting after multiply/divide:
      q := SDIV(a, b, resultshift)
      m := SMUL(a, b, resultshift)
    

    And maybe we could change ":=" to just "=".
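    The ROTATE(x, y, angle : newxvar, newyvar) form maps naturally onto a method that produces both results at once. A rough Python analogue (the function name and degree units are illustrative assumptions, not Spin2):

```python
import math

# Analogue of a two-result method like ROTATE(x, y, angle : nx, ny):
# the callee produces both results together and plain assignment
# unpacks them, which sidesteps ambiguous forms like
# "[x, y] += func2(a, b)".

def rotate(x: float, y: float, angle_deg: float):
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

newx, newy = rotate(1.0, 0.0, 90.0)   # plain multi-value assignment
```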
  • Fixed point seconded.
  • cgracey Posts: 14,206
    cgracey wrote: »
    What about floats?

    In my experience, single-precision (32 bits) is a little paltry, having only 24 ('1'+23) bits of mantissa. That's about 64ppb accuracy.

    Double-precision, on the other hand, is plenty rich, but 64 bits.

    Would it be too weird to have whole+fraction fixed-point types of 16+16 bits and 32+32 bits? They could be added and subtracted directly, without unpacking. Multiplying and dividing would be direct, too, but require result shifting, which is no big deal. Maybe even having settable-point types, like 8.24 bits, would be good. Their 'whole.fraction' issue would only come up when shifting multiplication and division results.

    Yes, if any types are added at all, please, please, add fixed point numbers!

    The beauty of fixed-point is that it's not a type! It's just how you use it. There just needs to be special multiply and divide functions to shift the results.
  • Heater. Posts: 21,230
    jmg,
    Maybe Intel will finally get it right, on one of their small 'Embedded Controller revisit' experiments, and then a Spin 2 to native x86 could have appeal.
    Maybe.

    When Intel has done that before it was not x86. For example: https://en.wikipedia.org/wiki/Intel_i960

    Intel still has an ARM license. And the last news I read was that they were about to make use of it again.

  • kwinn Posts: 8,697
    edited 2017-02-23 00:19
    cgracey wrote: »
    ........
    .............

    And maybe we could change ":=" to just "=".

    Wonderful. Eliminates one of my two most common mistakes. Now how about NOT using "#" to indicate the most commonly used addressing mode in PASM.
  • cgracey Posts: 14,206
    kwinn wrote: »
    cgracey wrote: »
    ........
    .............

    And maybe we could change ":=" to just "=".

    Wonderful. Eliminates one of my two most common mistakes. Now how about NOT using "#" to indicate the most commonly used addressing mode in PASM.

    I've thought about that. No # means literal and [reg] means register. Just for address operands. Is that rule complete, or would there be exceptions? I'm looking into it...
  • cgracey Posts: 14,206
    kwinn wrote: »
    cgracey wrote: »
    ........
    .............

    And maybe we could change ":=" to just "=".

    Wonderful. Eliminates one of my two most common mistakes. Now how about NOT using "#" to indicate the most commonly used addressing mode in PASM.

    It looks doable, all right.

    How would the rest of you feel about PASM addresses being expressed as such:
      JMP label         (was 'JMP #label')
      JMP [reg]         (was 'JMP reg')
    

    This would introduce ripple into the literal-vs-register syntax for all branches (and the LOC instruction), but would prevent common mistakes that I even make sometimes.

    I could make this change in the next release, v16. Yea or nay?
  • cgracey wrote: »
    How would the rest of you feel about PASM addresses being expressed as such:
      JMP label         (was 'JMP #label')
      JMP [reg]         (was 'JMP reg')
    

    This would introduce ripple into the literal-vs-register syntax for all branches (and the LOC instruction), but would prevent common mistakes that I even make sometimes.

    I could make this change in the next release, v16. Yea or nay?
    That's a 'yes' :)

  • jmg Posts: 15,175
    cgracey wrote: »
    What about floats?

    In my experience, single-precision (32 bits) is a little paltry, having only 24 ('1'+23) bits of mantissa. That's about 64ppb accuracy.

    Double-precision, on the other hand, is plenty rich, but 64 bits.

    Would it be too weird to have whole+fraction fixed-point types of 16+16 bits and 32+32 bits? They could be added and subtracted directly, without unpacking. Multiplying and dividing would be direct, too, but require result shifting, which is no big deal. Maybe even having settable-point types, like 8.24 bits, would be good. Their 'whole.fraction' issue would only come up when shifting multiplication and division results.

    Yes, certainly the lower precision of 32b float is often a pain, but moving away from the standard format would be an issue.
    Once you have gone > 32b, what is the issue with simply being f64?

    If you want more radical format, there is also this work

    http://forums.parallax.com/discussion/166008/john-gustafson-presents-beyond-floating-point-next-generation-computer-arithmetic

    Doing that would 'get you noticed' more :)


    The WebAssembly I linked to before, has i32, i64 and f32 and f64 types, & I think also u8,u16,u32

  • jmg Posts: 15,175
    cgracey wrote: »
    kwinn wrote: »
    cgracey wrote: »
    ........
    .............

    And maybe we could change ":=" to just "=".

    Wonderful. Eliminates one of my two most common mistakes. Now how about NOT using "#" to indicate the most commonly used addressing mode in PASM.

    It looks doable, all right.

    How would the rest of you feel about PASM addresses being expressed as such:
      JMP label         (was 'JMP #label')
      JMP [reg]         (was 'JMP reg')
    

    This would introduce ripple into the literal-vs-register syntax for all branches (and the LOC instruction), but would prevent common mistakes that I even make sometimes.

    I could make this change in the next release, v16. Yea or nay?

    I would be YES on that, as that is one of my main peeves with PASM, and what sets it apart most from others, fasmg et al.

    Because labels outnumber constants significantly in most ASM, code is clearer without the #label. Also, # means absolute value, which is not quite true in relative jump cases; it made sort-of sense in P1 days.

    Some asms use JMP @Reg, but JMP [Reg] is similar enough to be scan-read.

    You could have an ASM switch that deprecates the 'old' syntax but still allows it?


  • jmg Posts: 15,175
    cgracey wrote: »
    And maybe we could change ":=" to just "=".
    Languages that use ':=' usually have other uses for "=", but I think C's solution to that, "==", is common enough that most can scan-read it easily.
    Likewise, C's line comment "//" is now almost universal.

  • jmg Posts: 15,175
    ... I don't know anything about WebAssembly or asm.js, but I assume it doesn't let you use real pointers (for security), so it probably wouldn't be practical to use for this.

    The wiki link for asm.js has an example that calculates the length of a string.
  • Heater. wrote: »
    Certainly a recursive descent parser can use an endless amount of stack. When the code to be compiled makes a call, which makes a call, which makes a call....
    The current stack frame could be in cog registers, but once a call occurs, it is sent to hub via SETQ+WRLONG and the new frame is now in the cog registers.
    Doesn't that break things when the top level function passes a pointer/reference to lower level functions, to lower level functions....

    How is that pointer valid at the bottom of the pile?

    That's similar to the issue we discussed elsewhere of taking the address of a local variable or parameter and passing it to another COG, although in this case it would cause a problem even on the same COG.
  • Chip,
    I like fixed point for sure, but I think you need to support proper floats because they are standard. If you don't build it in, then someone will have to port over the float libraries from P1, and then we are back to the ugly syntax of using them.

    I also like the change for jmp/branches with no # and [reg].
  • cgracey wrote: »
    kwinn wrote: »
    cgracey wrote: »
    ........
    .............

    And maybe we could change ":=" to just "=".

    Wonderful. Eliminates one of my two most common mistakes. Now how about NOT using "#" to indicate the most commonly used addressing mode in PASM.

    It looks doable, all right.

    How would the rest of you feel about PASM addresses being expressed as such:
      JMP label         (was 'JMP #label')
      JMP [reg]         (was 'JMP reg')
    

    This would introduce ripple into the literal-vs-register syntax for all branches (and the LOC instruction), but would prevent common mistakes that I even make sometimes.

    I could make this change in the next release, v16. Yea or nay?

    +1
  • Cluso99 Posts: 18,069
    David Betz wrote:
    Anyway, why couldn't a compiler be written in Spin?
    It's been done: http://www.sphinxcompiler.com/ despite Spin's rather weak string primitives.

    -Phil
    And I have it running (though with little testing) on my Prop OS, so it now supports Kye's FAT16/32. You can even display the intermediate compiler output files from my OS.

    The only part missing from my OS is an editor!