
Why...O...why...in this day and age...


Comments

  • JasonDorie wrote: »
    I assume "formal syntax" just means that there's a proper spec for what the language syntax follows and definitive rules for how to parse it without ambiguity.

    A context-free grammar goes further: it is a definitive set of rules in which what a given construct can be is decided from the grammar alone, without having to have "compiled" (i.e. kept semantic context from) what you've seen so far. Restricted forms of it, with only a token or so of lookahead, make a language easy to compile, because a parser (essentially a state machine plus a stack) can be generated from the grammar automatically (this is what Yacc / Lex do, rather unreadably).

    C is nearly context-free (the typedef problem is the classic exception; a small illustration follows below); C++ isn't, though I can't remember exactly why. Templates, maybe?
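    As a concrete illustration of that typedef exception (a minimal sketch of my own, not code from anyone in this thread): the statement below parses completely differently depending on whether the identifier a has earlier been declared as a type, which is exactly the kind of information a purely context-free parser does not have.

        /* The classic "typedef problem": "a * b" is a declaration if "a" names a
           type, and a multiplication expression otherwise. Deciding which needs
           the symbol table, i.e. context, not just the grammar. */
        typedef int a;              /* remove this line and the parse changes */

        void demo(void)
        {
            int value = 0;
            a * b = &value;         /* here: declares b as a pointer to a (an int) */
            /* without the typedef, "a * b" would instead be parsed as a
               multiplication of two variables named a and b. */
            (void)b;
        }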

  • Heater. Posts: 21,230
    The_Master,

    There is a lot of interesting discussion here that I have not absorbed fully but this stood out:
    Maybe adding Linux to my Elev-8

    Does anyone know if there is enough weight capacity on the Elev-8 for a hard drive? Maybe a networking card too?
    Did you know that almost every modern smartphone runs Linux? Yes, including your iPhone.

    They all use a GSM radio modem subsystem made by Qualcomm. That tiny radio subsystem runs Linux.

    So yes, there is no reason why an Elev-8 cannot run Linux.

    Or there is this tiny Linux-running Wi-Fi module:
    https://www.kickstarter.com/projects/onion/omega2-5-iot-computer-with-wi-fi-powered-by-linux

    Which I am awaiting delivery of as we speak.

  • jmg Posts: 15,173
    edited 2017-01-07 20:40
    David Betz wrote: »
    cgracey wrote: »
    ..
    My thought has been to get Spin2 going on my current platform (x86) and then get it working in Spin, itself, making the final departure from x86.
    That sounds like a good plan. It would be nice to be able to run the same version of Spin self-hosted on the P2 as we run on the PC under Windows, Mac, and Linux. Do you plan to write two code generators, one for x86 and one for P2?
    The danger of this pathway is that x86 excludes ARM, which means the whole Raspberry Pi ecosystem.

    To me it is smarter to first define exactly what Spin we are talking about.
    Here, I believe Chip intends to do a P2_Spin -> P2.SpinByteCodes

    A P2_Spin -> x86 or P2_Spin -> ARM target would be better handled as a P2_Spin upgrade of Spin2cpp, surely?
    Note that Spin2cpp can already 'compile to a PC binary (x86/ARM)' via C.
    (i.e. add whatever extra fruit is decided, and compile via C from there.)

    Another path could be a P2 simulator, coded in C, that would allow those P2.SpinByteCodes to run on x86/ARM/etc, but that has another possible issue:

    Re self hosting - The Sphinx link above says this :
    "Sphinx
    Sphinx is a Spin compiler—written in Spin—that runs on the Parallax Propeller. Although memory constraints prevent Sphinx from compiling the full gamut of Spin programs (see Limitations below), it is not a toy compiler.
    "
    Those memory constraints will still exist on a P2, or P2-emulated.
    P2 has more memory than P1, but the memory ratio is still the same.

    It may be that P2.SpinByteCodes, when PC-emulated, could access a larger virtual memory than a physical P2?
  • jmg wrote: »
    The danger of this pathway is that x86 excludes ARM, which means the whole Raspberry Pi ecosystem. [...]
    A P2_Spin -> x86 or P2_Spin -> ARM target would be better handled as a P2_Spin upgrade of Spin2cpp, surely?
    True. We probably don't want to leave out ARM. Anyway, I agree that spin2cpp would be a good basis for the official Spin compiler for P2. I tried to argue that several years ago to no effect. I was just happy that Chip was interested in using a high level language (Spin2) instead of x86 assembly. Maybe Spin2 will be improved (adding structures for instance) just to make it easier to implement a compiler.

  • jmg Posts: 15,173
    edited 2017-01-07 21:01
    cgracey wrote: »
    So, how do you formalize syntax? I understand that this would simplify parsing and allow a lot of standardized tools to play roles within the compiler. What I fear is that it would mandate largish text constructs in little corners of the language where maybe a colon character could have been used.

    Perhaps look at existing language definitions?
    One good example choice would be Oberon, not because it is close to Spin, but because it has been exactly defined in this formal way, by experts
    (in extended Backus-Naur Form, EBNF).

    https://cseweb.ucsd.edu/~wgg/CSE131B/oberon2.htm
    and, more recent, in PDF:
    https://www.inf.ethz.ch/personal/wirth/Oberon/Oberon07.Report.pdf
    (see the appendix, "The Syntax of Oberon")
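    For a flavour of that notation, here is a small expression fragment written in the same Wirth-style EBNF (an illustrative sketch only, not quoted verbatim from the Oberon report): braces { } mean zero-or-more repetition, brackets [ ] mean optional, and | separates alternatives.

        Expression       = SimpleExpression [Relation SimpleExpression].
        SimpleExpression = ["+" | "-"] Term {AddOperator Term}.
        Term             = Factor {MulOperator Factor}.
        Factor           = number | ident | "(" Expression ")".
        AddOperator      = "+" | "-" | "OR".
        MulOperator      = "*" | "/" | "DIV" | "MOD" | "&".
        Relation         = "=" | "#" | "<" | "<=" | ">" | ">=".

    Operator precedence and associativity fall straight out of how the rules nest, which is the same property Phil demonstrates for Spin further down.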

  • jmg Posts: 15,173
    David Betz wrote: »
    ... I was just happy that Chip was interested in using a high level language (Spin2) instead of x86 assembly.

    :) Agreed.
    David Betz wrote: »
    Maybe Spin2 will be improved (adding structures for instance) just to make it easier to implement a compiler.
    Given the larger P2 memory, extensions in this direction would be expected.

    They can probably be quite small in number and still give a workable P2_Spin revision of Spin2cpp for compiler work.

    I guess a native 32-bit type is going to be enough for compilers, as that allows 4 GB memory images?

  • jmg wrote: »
    Re self hosting - The Sphinx link above says this :
    "Sphinx
    Sphinx is a Spin compiler—written in Spin—that runs on the Parallax Propeller. Although memory constraints prevent Sphinx from compiling the full gamut of Spin programs (see Limitations below), it is not a toy compiler.
    "
    Those memory constraints will still exist on a P2, or P2-emulated.
    P2 has more memory than P1, but the memory ratio is still the same.
    Sphinx was limited by the 32K of hub RAM. P2 will have 512K of RAM, which is 16 times what P1 has.
  • Heater. Posts: 21,230
    edited 2017-01-07 20:55
    cgracey,
    So, how do you formalize syntax?
    Back in the late 1950s / early 1960s there came the language ALGOL. Structurally it is very much like Spin, but without the idea of objects.

    Even if nobody uses ALGOL anymore, it is the mother of most languages we use today: Pascal, Ada, C, JavaScript, Python, etc. One of the great things to come out of its development was the idea of a formal definition of its syntax, the so-called Backus–Naur form.
    https://en.wikipedia.org/wiki/Backus–Naur_form

    With such a formal description of the language anyone who wants to implement the thing has a head start. And a means to know if they got it right or not.

    With such a formalism in place, it would not have been necessary for Eric to reverse-engineer the x86 of the original Spin compiler in order to make a C++ version of it, for example.

    Of course we then have the semantics to worry about: the actual meaning of the code, what it should do, rather than just the rules for allowed symbol sequences in the source. For example, "2 + 2" means add 2 and 2 rather than subtract or whatever. As far as I know there is no comparable formalism for that in common use.



  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2017-01-07 21:09
    cgracey wrote:
    So, how do you formalize syntax?
    One way is with a production language like BNF (Backus-Naur Form). BNF can be used to define a context-free language. Here's an example of what the BNF for a portion of Spin might look like:
    <spin program> ::= <section>* <pub section> <section>*
    <section> ::= <con section> | <var section> | <obj section> | <pub section> | <pri section> | <dat section>
    <con section> ::= "CON\n" <con definition>*
    <con definition> ::= <constant symbol> "=" <constant expression> "\n" | "#" <constant expression> "," <constant symbol list> "\n"
    <constant symbol list> ::= <constant symbol> | <constant symbol list> "," <constant symbol>
    <constant symbol> ::= <symbol>
    <var section> ::= "VAR\n" <var definition>*
    <var definition> ::= <type> " "+ <variable list>
    <type> ::= "LONG" | "WORD" | "BYTE"
    <variable list> ::= <variable> | <variable list> "," <variable>
    <variable> ::= <symbol> | <symbol> "[" <constant expression> "]"
    <constant expression> ::= <expression>
    <expression> ::= <and expression> | <expression> "OR" <and expression>
    <and expression> ::= <not expression> | <and expression> "AND" <not expression>
    <not expression> ::= <compare expression> | "NOT" <compare expression>
    <compare expression> ::= <minmax expression> | <compare expression> <compare op> <minmax expression>
    <compare op> ::= "<" | ">" | "<>" | "==" | "=<" | "=>"
    <minmax expression> ::= <add expression> | <minmax expression> <minmax op> <add expression>
    <minmax op> ::= "#>" | "<#"
    <add expression> ::= 
    ...
    <symbol> ::= <alpha> <alphanumeric>*
    <alpha> ::= "A" | "a" | "B" | "b" | ... | "Z" | "z" | "_"
    <alphanumeric> ::= <alpha> | "0" | "1" | ... | "9"
    
    Each line is a production rule: a named element on the left, followed by "::=" and an OR-separated ("|") list of what can constitute that element. Notice that productions can be self-referential (i.e. recursive). To save space, I've left out rules pertaining to non-essential whitespace. I've also resorted to regular-expression syntax for repeated elements: suffix "*" for "zero or more" and suffix "+" for "one or more."

    In the definition for <expression>, you will notice that the operator precedence rules automatically derive from the subsequent production definitions.

    To save space, I've glossed over the differences between a <constant expression> and just an <expression>. This could be handled in the compiler, where it checks to make sure each term in the <expression> is a constant.

    Also, not every part of this grammar would be handled in the recursive-descent phase of compilation. Some, like the definition for <symbol> would be taken care of in the scanner that comes before the compile phase.

    A correct grammar must produce all allowable statements in the target language and exclude all those that are not allowable. Also, for each allowable statement there must be one and only one way to get there through the grammar; otherwise the language will be ambiguous. Needless to say, a complete grammar for Spin would be very large. But once you have it, I believe that it will make it easier for you -- or especially for someone else -- to write a compiler for it (a small sketch of the idea follows this post).

    -Phil
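    To make the "precedence falls out of the productions" point concrete, here is a minimal recursive-descent sketch in C (an illustration only, not code from any actual Spin compiler) covering just the <add expression> and <minmax expression> rules above. Each rule becomes one function, and a rule that sits deeper in the chain binds more tightly.

        /* Minimal recursive-descent sketch: integers, "+", "-", and the Spin
           limit-minimum/limit-maximum operators "#>" and "<#". The nesting of
           the functions mirrors the nesting of the grammar rules, which is what
           gives "+"/"-" higher precedence than "#>"/"<#". */
        #include <stdio.h>
        #include <stdlib.h>

        static const char *src;                    /* cursor into the source text */

        static void skip_ws(void) { while (*src == ' ') src++; }

        /* terminal: an integer literal */
        static long parse_number(void)
        {
            char *end;
            long v;
            skip_ws();
            v = strtol(src, &end, 10);
            src = end;
            return v;
        }

        /* <add expression> ::= <number> | <add expression> ("+" | "-") <number> */
        static long parse_add(void)
        {
            long v = parse_number();
            for (;;) {
                skip_ws();
                if (*src == '+')      { src++; v += parse_number(); }
                else if (*src == '-') { src++; v -= parse_number(); }
                else return v;
            }
        }

        /* <minmax expression> ::= <add expression>
                                 | <minmax expression> ("#>" | "<#") <add expression> */
        static long parse_minmax(void)
        {
            long v = parse_add();
            for (;;) {
                skip_ws();
                if (src[0] == '#' && src[1] == '>') {        /* limit minimum */
                    src += 2; long r = parse_add(); if (v < r) v = r;
                } else if (src[0] == '<' && src[1] == '#') { /* limit maximum */
                    src += 2; long r = parse_add(); if (v > r) v = r;
                } else return v;
            }
        }

        int main(void)
        {
            src = "2 + 3 #> 10 <# 4";   /* evaluates as ((2 + 3) #> 10) <# 4, i.e. 4 */
            printf("%ld\n", parse_minmax());
            return 0;
        }

    Turning this hand-written style into a full Spin parser is exactly where a written-down grammar pays off: each production maps to one function, and the left-recursive rules become the loops.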
  • jmg Posts: 15,173
    edited 2017-01-07 21:01
    Dave Hein wrote: »
    Sphinx was limited by the 32K of hub RAM. P2 will have 512K of RAM, which is 16 times what P1 has.

    Of course, but P2 users are also going to expect to create much larger programs, hence my mention of memory ratio (the ratio between the largest possible binary size and the maximum available (practical) compiler memory).
    That ratio on P2 remains at 1:1, exactly the same as on P1.
  • Heater. Posts: 21,230
    I started out to write a Spin syntax definition in pegjs. As far as I can tell, pegjs lets one specify a syntax just as BNF does, but is much easier to read.

    I gave up. Unlike Pascal and such you cannot use simple recursive production rules to parse Spin.

    Spin has these PUB, VAR etc, blocks to deal with. It uses white space block delimiting. Everything that fights against recursive descent parsing.

    Or, maybe I just missed a point.



  • jmg wrote: »
    David Betz wrote: »
    Maybe Spin2 will be improved (adding structures for instance) just to make it easier to implement a compiler.
    Given the larger P2 memory, extensions in this direction would be expected. [...]
    I don't think larger memory would be required for supporting features like structures. The P1 compiler runs on a PC so it isn't memory constrained and the runtime support for structures is minimal.

  • jmg wrote: »
    P2 has more memory than P1, but the memory ratio is still the same.
    I don't understand what you mean by "memory ratio".

  • jmg Posts: 15,173
    Heater. wrote: »
    I gave up. Unlike Pascal and such you cannot use simple recursive production rules to parse Spin. [...]
    Hmm... If a workable solution is elusive, maybe another approach could be to have one parser, where that 'source becomes the definition', and that parser is used for all of these:


    P2_Spin -> P2.SpinByteCodes (what Chip wants to do)
    and
    P2_Spin -> C source -> Any C host (already in Spin2cpp)
    and
    P2_Spin -> P2_ASM (already in Spin2cpp)
    and
    P2_Spin -> P1_ASM (already in Spin2cpp)

  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2017-01-07 21:17
    heater wrote:
    I gave up. Unlike Pascal and such you cannot use simple recursive production rules to parse Spin.
    I'm not sure that's true, Heater. I think you could define a <code block> and just let the scanner sniff them out prior to parsing, adding pre- and post-delimiters that the parser would recognize. Also, in my example there's no provision for comments. Again, these would be removed during the scan phase, so they don't get in the way of the parser.

    -Phil
  • Heater. wrote: »
    I gave up. Unlike Pascal and such you cannot use simple recursive production rules to parse Spin.
    Spin has these PUB, VAR etc, blocks to deal with. It uses white space block delimiting. Everything that fights against recursive descent parsing.
    I believe that Eric wrote spin2cpp using either YACC or Bison. Both would require a BNF-like syntax. You might look at what he did.
  • jmg wrote: »
    Hmm... If a workable solution is elusive, maybe another approach could be to have one parser, where that 'source becomes the definition', and that parser is used for all of these


    P2_Spin -> P2.SpinByteCodes (what Chip wants to do)
    and
    P2_Spin -> C source -> Any C host (already in Spin2cpp)
    and
    P2_Spin -> P2_ASM (already in Spin2cpp)
    and
    P2_Spin -> P1_ASM (already in Spin2cpp)
    Whether all of these really exist in spin2cpp depends on what Chip plans to change in the move from Spin -> Spin2. If Spin2 is just Spin with some additions then it should be relatively easy to add those to spin2cpp. I guess we need to know his plans before we know if spin2cpp will be a good basis for a Spin2 compiler.

  • Heater. wrote: »
    Spin has these PUB, VAR etc, blocks to deal with. It uses white space block delimiting. Everything that fights against recursive descent parsing.


    I think Eric just had the lexer add tokens indicating "indent in" and "indent out" and then let the BNF grammar be defined in terms of those tokens (roughly as sketched below).
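    For anyone who hasn't seen the trick, here is a minimal sketch of that indent-token idea in C (this is how Python-style tokenizers handle indentation; it illustrates the technique, and is not a claim about what spin2cpp's lexer actually does). The lexer measures the leading spaces on each line and emits synthetic INDENT/OUTDENT tokens, so the grammar itself can stay an ordinary context-free one.

        /* Sketch: convert leading-whitespace nesting into INDENT/OUTDENT tokens.
           No tab handling and almost no error handling; purely illustrative. */
        #include <stdio.h>

        #define MAX_LEVELS 32

        static int levels[MAX_LEVELS] = {0};    /* stack of indentation widths */
        static int top = 0;

        static void handle_line(const char *line)
        {
            int indent = 0;
            while (line[indent] == ' ') indent++;
            if (line[indent] == '\0') return;            /* blank line: no tokens */

            if (indent > levels[top] && top + 1 < MAX_LEVELS) {
                levels[++top] = indent;                  /* deeper: one INDENT    */
                printf("INDENT ");
            }
            while (indent < levels[top]) {               /* shallower: OUTDENTs   */
                top--;
                printf("OUTDENT ");
            }
            printf("LINE(%s)\n", line + indent);         /* rest goes to the lexer */
        }

        int main(void)
        {
            const char *demo[] = {                       /* hypothetical Spin-ish input */
                "PUB main",
                "  repeat",
                "    foo := foo + 1",
                "  bar := 0",
                0
            };
            for (int i = 0; demo[i]; i++)
                handle_line(demo[i]);
            return 0;                                    /* trailing OUTDENTs omitted */
        }

    With indentation turned into explicit tokens, a PUB body is just "header INDENT statements OUTDENT" as far as the grammar is concerned, which is why it no longer fights recursive-descent (or Yacc/Bison) parsing.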

  • jmg Posts: 15,173
    David Betz wrote: »
    jmg wrote: »
    P2 has more memory than P1, but the memory ratio is still the same.
    I don't understand what you mean by "memory ratio".

    See above, where I expand that as the ratio between the largest possible binary size and the maximum available (practical) compiler memory.
    That ratio on P2 remains at 1:1, exactly the same as on P1.
    David Betz wrote: »
    Whether all of these really exist in spin2cpp depends on what Chip plans to change in the move from Spin -> Spin2. If Spin2 is just Spin with some additions then it should be relatively easy to add those to spin2cpp. I guess we need to know his plans before we know if spin2cpp will be a good basis for a Spin2 compiler.
    Of course, but the assumption is Chip will want to leverage all the Spin code for P1, and so make P2_Spin a sensible superset, and certainly avoid any conflicts.

    Out of this may even come a P1_SpinB, that can be used on P1.
    Certainly once x86 ASM is removed, upgrades are easier.
    Another possible step would be a P1_SpinC, where the ROM is also upgraded.

  • jmg wrote: »
    See above, where I expand that as the ratio between the largest possible binary size and the maximum available (practical) compiler memory. That ratio on P2 remains at 1:1, exactly the same as on P1.
    Why does that ratio matter if you have some sort of external storage like an SD card that can be used for temporary storage during compilation?
    jmg wrote: »
    Of course, but the assumption is Chip will want to leverage all the Spin code for P1, and so make P2_Spin a sensible superset, and certainly avoid any conflicts. [...]
    At one point I think Chip mentioned using the hardware stack in Spin but that would limit the depth of nested calls significantly and probably break much of what is in OBEX. Maybe he's no longer planning that though.

  • Cluso99 Posts: 18,069
    Michael Park wrote a Spin compiler called "homespun" around the same time Brad Campbell wrote "bst". Both were Spin-identical, with some extra #define options. One or both had dead-code removal, and IIRC there were even code-optimisation options.

    Michael took his compiler further and rewrote it in Spin, and as a result it runs on the P1 (see "Sphinx"). I have converted this to run under my Prop OS, but it's not been fully tested and doesn't use the full hub memory due to some debugging code left in. I have added OS commands to display the intermediate code outputs.
  • jmg Posts: 15,173
    edited 2017-01-07 23:10
    David Betz wrote: »
    Why does that ratio matter if you have some sort of external storage like an SD card that can be used for temporary storage during compilation?
    Well, yes, but that paging to external memory is going to be very slow, and you now dictate that every system must have a conforming SD card. Maybe that system-level spec will be tolerable to those really wanting 'self-hosting'.
    To me, it would be too compromised to be worthwhile.

    David Betz wrote: »
    At one point I think Chip mentioned using the hardware stack in Spin but that would limit the depth of nested calls significantly and probably break much of what is in OBEX. Maybe he's no longer planning that though.
    Given there are Spin compilers already, it seems to make little sense to break compatibility for what will be a modest speed gain in a token-based version?

  • jmg wrote: »
    David Betz wrote: »
    Why does that ratio matter if you have some sort of external storage like an SD card that can be used for temporary storage during compilation?
    Well, yes, but that paging to external memory is going to be very slow, and you now dictate every system must have a conforming SD card. Maybe that system-level spec will be tolerable to those really wanting 'self-hosting'.
    To me, it would be too compromised to be worthwhile.
    Well, you can always use hub memory as temporary storage but you won't be able to compile an image that fills all of hub memory that way of course.

    David Betz wrote: »
    At one point I think Chip mentioned using the hardware stack in Spin but that would limit the depth of nested calls significantly and probably break much of what is in OBEX. Maybe he's no longer planning that though.
    Given there are Spin Compilers already, it seems to make little sense to break compatibility, for what will be a modest speed gain in a token-based version ?
    Maybe I misunderstood and he was going to use the hardware stack in the implementation of the Spin bytecode VM. That would make more sense I guess.

  • jmg wrote: »
    Dave Hein wrote: »
    Sphinx was limited by the 32K of hub RAM. P2 will have 512K of RAM, which is 16 times what P1 has.

    Of course, but P2 users are also going to expect to create much larger programs, hence my mention of memory ratio. (the ratio between largest possible binary size, and max available (practical) compiler memory)
    That ratio on P2 remains at 1:1, exactly the same as P1.
    The Sphinx Spin compiler worked on single objects at a time, and then linked them all together with a separate linker. So the available RAM limited the size of the object, and not so much the size of the whole program. With 512K of RAM a compiler should be able to handle a fairly large object. Of course it will have limits, but I would guess that most objects could be compiled with 512K available.
  • cgracey Posts: 14,155
    Dave Hein wrote: »
    With 512K of RAM a compiler should be able to handle a fairly large object. Of course it will have limits, but I would guess that most objects could be compiled with 512K available.

    Just add a HyperRAM and you've got enough memory to compile anything, by whatever methodology you'd want.
  • jmg Posts: 15,173
    cgracey wrote: »
    Just add a HyperRAM and you've got enough memory to compile anything, by whatever methodology you'd want.

    Has anyone got HyperRAM working on P1 or P2 yet ?
    I'm rather hoping the next iteration of these has a slightly less onerous timing spec around refresh, and will allow a scan line of video between refreshes.
  • cgracey wrote: »
    Just add a HyperRAM and you've got enough memory to compile anything, by whatever methodology you'd want.
    So the idea would be to use the HyperRAM as a RAM-disk for temporary files? How would that be better than using an SD card? Would it be faster?

  • cgracey Posts: 14,155
    David Betz wrote: »
    So the idea would be to use the HyperRAM as a RAM-disk for temporary files? How would that be better than using an SD card? Would it be faster?

    You wouldn't have to read and write files to perform a compilation. You could do it all in memory, maybe addressed, or using it as a RAM-disk. I think it would be a lot faster, and no write-wear.
  • cgracey wrote: »
    You wouldn't have to read and write files to perform a compilation. You could do it all in memory, maybe addressed, or using it as a RAM-disk. I think it would be a lot faster, and no write-wear.
    Are you saying there would be a way to map the HyperRAM into hub address space and use it directly? How would that work?

  • cgracey Posts: 14,155
    David Betz wrote: »
    Are you saying there would be a way to map the HyperRAM into hub address space and use it directly? How would that work?

    I mean that you could keep track of where you put things into it and retrieve them whenever needed.
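    To make "keep track of where you put things" a little more concrete, here is a hedged sketch of one way a self-hosted compiler could stash intermediate results (per-object bytecode, symbol tables and so on) in external RAM: a bump allocator plus a small directory of named blobs. hyperram_write() and hyperram_read() are hypothetical placeholders for whatever driver actually ends up existing (stubbed here with a plain array so the sketch runs on a PC); none of this is Chip's plan or a real API.

        /* Sketch: bump-allocate blobs into external RAM and remember where
           each one went, so they can be fetched back into hub RAM later. */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Stand-ins for a real HyperRAM driver (assumed, not an actual API). */
        static uint8_t fake_hyperram[1 << 20];
        static void hyperram_write(uint32_t a, const void *s, uint32_t n) { memcpy(fake_hyperram + a, s, n); }
        static void hyperram_read(uint32_t a, void *d, uint32_t n)        { memcpy(d, fake_hyperram + a, n); }

        typedef struct { char name[16]; uint32_t addr, len; } blob_t;

        static blob_t   table[64];              /* directory of stored blobs      */
        static int      blob_count = 0;
        static uint32_t next_free  = 0;         /* bump pointer into external RAM */

        /* Store a blob and remember where it went. Returns its table index. */
        static int stash(const char *name, const void *data, uint32_t len)
        {
            hyperram_write(next_free, data, len);
            strncpy(table[blob_count].name, name, sizeof table[blob_count].name - 1);
            table[blob_count].addr = next_free;
            table[blob_count].len  = len;
            next_free += (len + 3) & ~3u;       /* keep blobs long-aligned        */
            return blob_count++;
        }

        /* Fetch a previously stored blob back into hub RAM by name. */
        static int fetch(const char *name, void *dst)
        {
            for (int i = 0; i < blob_count; i++)
                if (strcmp(table[i].name, name) == 0) {
                    hyperram_read(table[i].addr, dst, table[i].len);
                    return (int)table[i].len;
                }
            return -1;                          /* not found */
        }

        int main(void)
        {
            char out[16];
            stash("obj1.binary", "SPIN", 5);
            printf("%d bytes: %s\n", fetch("obj1.binary", out), out);
            return 0;
        }

    Whether the external RAM is used this way or as a RAM disk, the win over an SD card is the same: no file system, no sector writes, and no write wear.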