
Prop-2 Release Date and Price


Comments

  • jmg Posts: 15,173
    edited 2015-07-13 05:37
    To be clear, in the hardware event system I described I was thinking that only one event of a particular source, e.g. a particular pin change, would be registered. If another edge occurs before the last one was handled it would be lost.


    'Lost' ?   That's a term that needs care.
    'Deferred for later action' is ok, but completely lost, that's rather less tolerable.
  • Heater. Posts: 21,230
    edited 2015-07-13 09:36
    jmg,

    'Lost' ? That's a term that needs care.
    'Deferred for later action' is ok, but completely lost, that's rather less tolerable.

    No, it is not intolerable. It is a fact of life. 
    A system with interrupts can miss incoming events when they arrive faster than you can handle them. A UART can overflow its buffer and drop bytes if you can't deal with them fast enough, etc, etc, etc. You have to design your system around the required input rates.
    If input rates are an issue you need to take drastic action: get a faster machine, add another processor, build some custom logic in hardware to deal with it. Ever notice how traditional systems with interrupts come with input FIFOs and DMA handling all in hardware?
    Now, my proposed event handling in hardware model is equivalent to a system with interrupts but no priority levels and hence no nested interrupts. One could argue that adding such priorities would allow some inputs to be handled faster than others. My response to that is that if you have such a situation then on the Prop one can use another COG dedicated to handling those "high priority" faster events.
    The event based system is, I believe, simpler in hardware implementation and the event based model of programming is simpler than an interrupt based system, especially priority based interrupt systems (no race conditions).
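
    A minimal sketch of the dispatch side of such a scheme, in plain C (all names are hypothetical, and the hardware push is shown as a software stub):

    ```c
    #include <stdint.h>

    #define FIFO_SIZE   16     /* power of two */
    #define NUM_SOURCES 8

    typedef void (*handler_t)(void);

    static uint8_t fifo[FIFO_SIZE];      /* queued event-source IDs */
    static volatile unsigned head, tail; /* free-running indices */

    /* In the proposed scheme the hardware pushes a source ID on each
       trigger. The overrun policy lives here: if the FIFO is full the
       event is dropped (or deferred, per the discussion above). */
    void hw_push_event(uint8_t source) {
        if (head - tail < FIFO_SIZE)
            fifo[head++ & (FIFO_SIZE - 1)] = source;
    }

    static void on_pin_change(void) { /* run to completion, then return */ }

    static handler_t handlers[NUM_SOURCES] = { on_pin_change };

    /* One COG runs this loop: events are handled one at a time, in
       arrival order, with no nesting and no priorities. */
    void event_loop(void) {
        for (;;) {
            while (head == tail)
                ;                        /* idle until an event arrives */
            handler_t h = handlers[fifo[tail++ & (FIFO_SIZE - 1)]];
            if (h)
                h();
        }
    }
    ```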
  • Tor Posts: 2,010
    edited 2015-07-13 07:16
    Sounds like an infinite loop to me.



    Assuming you didn't mean the ND hardware implemented an infinite loop.. but that mapping such a system onto a Propeller would be an infinite process.. yes, I didn't really suggest that as an option :), just as a data point when people discussed interrupts and the need for a stack, because not all interrupt systems used one (and the ND variant was extremely powerful).

    (Sigh. I'm giving up on the 'quote' mechanism of this forum. Can't be done in a sane way.)
  • Heater. Posts: 21,230
    I think use of tags like that is just fine. We know what they mean even if the forum does not :)
  • MJB Posts: 1,235
    yup, and therein lies the problem - the programmer's software has to sample the pins at a different time from the WAIT opcode, and so you are never really sure if that sample is what the HW opcode reacted to.



    I did not think about this before ...
    so the WAITxxx really should do a capture of INA, to make sure there is no such glitch.

    but usually a WAITxxx followed by INA is luckily quite close ...
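
    To make the window concrete, a sketch in PropGCC-style C (assuming propeller.h provides waitpne() and the INA register; the pin choice is hypothetical):

    ```c
    #include <propeller.h>   /* PropGCC: INA register, waitpne() */

    #define PIN_MASK (1 << 8)   /* hypothetical pin to watch */

    unsigned watch_pin(void) {
        /* Block until (INA & PIN_MASK) != PIN_MASK, i.e. the pin goes low. */
        waitpne(PIN_MASK, PIN_MASK);

        /* Re-sample INA a few clocks later. If the pin bounced back high
           in between, this sample is NOT the state that released the wait;
           that is the glitch being described. A hardware capture of INA at
           the instant the WAITxxx fired would close the window. */
        return INA & PIN_MASK;
    }
    ```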
  • jmg Posts: 15,173

    A system with interrupts can miss incoming events when they arrive faster than you can handle them. A UART can over flow it's buffer and drop bytes if you can't deal with them fast enough, etc, etc, etc. You have to design your system around the required input rates.
    That's overrun, which is quite different from lost.
    Lost means totally missing an event, because something else masked it.
    Other MCUs solve this with sticky HW flags - it can be managed easily enough.
    A Prop HW event handler would need to do the same, at the detail level.
    It is OK to defer, but lost, aka 'vanished into thin air', is not engineering design.
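
    The sticky-flag scheme being described, as a minimal sketch in plain C (names hypothetical; on a real MCU the set side is an edge detector in silicon and the clear must be atomic):

    ```c
    #include <stdint.h>

    /* One bit per source. The detector ORs bits in; the handler clears
       only the bit it consumed, so an edge arriving mid-service is
       deferred rather than masked. */
    static volatile uint32_t pending;

    void edge_detector(uint32_t source_bit) {
        pending |= source_bit;      /* sticky: set until acknowledged */
    }

    void service(uint32_t source_bit) {
        if (pending & source_bit) {
            pending &= ~source_bit; /* acknowledge first... */
            /* ...then handle. A new edge during handling re-sets the
               bit and is picked up next time round: deferred, not lost.
               Only a second edge arriving while the first is still
               pending collapses into one - that is the overrun case. */
        }
    }
    ```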
  • Heater. Posts: 21,230
    edited 2015-07-13 10:39
    jmg,
    Yes, it's overrun, as in "lost", same thing.
    Yes, interrupt hardware generally latches the input triggers. That ensures the interrupt source is remembered while some higher priority interrupt handler is running (or while the handler for that source has not yet finished with the last interrupt on that source).
    Note that with hardware latches on the interrupt source no further edges can be accepted until the interrupt handler acknowledges the interrupt. That is to say, overrun can still happen and interrupts can still be lost.
    The event handler scheme I outlined puts incoming events into a FIFO. That serves the same purpose as the latch bits in normal interrupt controllers.
    Note that the event model with FIFO ensures interrupts are handled in time order. No rapidly recurring event can block less frequent events. That's a bonus sometimes.
    Admittedly a regular interrupt controller system will have a lower latency than the event handler scheme I describe, but as I said, if that kind of speed is an issue then use another COG for it.
      
  • jmg Posts: 15,173

    Note that the event model with FIFO ensures interrupts are handled in time order. No overly rapidly occurring event can block lesser frequent events. That's a bonus some times. 
    True, but other times it kills determinism, which may matter more, so that would need to be a user-choice.

  • Heater. Posts: 21,230
    edited 2015-07-13 12:22
    jmg,

    True, but other times it kills determinism, which may matter more...

    As soon as you allow external happenings to steer the path of execution of your program you have lost determinism no matter what. I mean, you can no longer tell what order your code will be run in or when. This loss of determinism is true no matter if you use interrupts or events or even polling.

    One can regain some determinism with interrupts. An interrupt handler triggered by a high priority interrupt will have a deterministic, known, latency to that event. But of course that just shifts the non-determinism to anything else going on in your program. It has to go somewhere.

    The event based scheme I describe has a determinism all of its own. Remember that all event handlers begin, do their stuff and then terminate as soon as possible. Their worst case execution time is known. The worst case scenario is when all possible event sources trigger at the same time, thus causing all of them to be placed in the event FIFO. Obviously there will be a long latency as all the events are executed one at a time until the FIFO is empty.

    But here is the magic: that worst case latency is exactly known! This is generally not true with interrupt based systems.
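
    As a worked illustration of that claim (a sketch; the WCET figures would be whatever the user has measured for their handlers):

    ```c
    /* Worst case: every source triggers at once and the FIFO fills.
       With run-to-completion handlers and no nesting, the drain time
       is simply the sum of their worst-case execution times, so the
       bound is exact. */
    unsigned worst_case_latency(const unsigned wcet_cycles[], int n_sources) {
        unsigned total = 0;
        for (int i = 0; i < n_sources; i++)
            total += wcet_cycles[i];
        return total;   /* cycles until the FIFO is guaranteed empty */
    }
    ```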

    ..., so that would need to be a user-choice.

    The user does indeed have a choice. Put that time critical thing that needs short latency into its own COG. Easy (a sketch follows at the end of this post).

    Not only that but the user has the information available to easily make choices. He knows the worst case execution time of his handlers and the maximum latency they need. He knows when he has to distribute event handlers to other processors.
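
    What "use another COG" looks like in practice, as a hedged PropGCC-style C sketch (assuming propeller.h provides cogstart(), waitpeq() and INA; the pin and names are hypothetical):

    ```c
    #include <propeller.h>   /* PropGCC: cogstart(), waitpeq(), INA */

    #define FAST_PIN (1 << 4)            /* hypothetical high-priority input */

    static unsigned stack[64];           /* stack for the dedicated cog */
    static volatile unsigned edge_count; /* shared with the main cog */

    /* Runs alone in its own cog: nothing else competes for its cycles,
       so the response to this pin stays deterministic. */
    static void fast_watcher(void *unused) {
        (void)unused;
        for (;;) {
            waitpeq(FAST_PIN, FAST_PIN); /* wait for the pin to go high */
            edge_count++;                /* minimal, fixed-time reaction */
            waitpeq(0, FAST_PIN);        /* wait for it to drop again */
        }
    }

    void start_watcher(void) {
        cogstart(fast_watcher, NULL, stack, sizeof(stack));
    }
    ```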
  • kwinn Posts: 8,697
    I'll bet there was a way to deal with it, and as mentioned so many times before, it's not just a simple addition. 


    Of course there was another way to deal with it, but it took a lot of reshuffling of what code and functions went in what cogs.

    It may not be "just a simple addition", but compared to what is already implemented it can't be all that complicated either. A lot of the required circuitry is already there for the waitxxx instructions.
  • I'll bet there was a way to deal with it, and as mentioned so many times before, it's not just a simple addition. 


    Of course there was another way to deal with it, but it took a lot of reshuffling of what code and functions went in what cogs.

    It may not be "just a simple addition", but compared to what is already implemented it can't be all that complicated either. A lot of the required circuitry is already there for the waitxxx instructions.


    Here is the conflict. Adding the feature is likely to improve this use case, but it will come at the expense of the ones that define the product in the market.

    Given how things are intended to be done, and the strong motivation by users to do them as they are used to doing on other products, people will use the feature way more than intended, essentially marginalizing the real differentiator.

    Worse, the limited feature and the intent of the Prop won't be aligned with expectations, which will reduce the perceived value and increase the perceived cost of doing things as intended.

    While having it would likely have made your scenario easier, the minor investment you did make expands your ability to use the product as intended and to maximize the benefits that go along with that too.

    Props work differently. If they don't, there really isn't a reason for them to exist.

    Props are easy most of the time too.

    That won't continue to be true if the feature set gets diluted by niche case add ons.

    Finally, Props work differently, and how can that actually make sense when they include features intended to make them work the same as everything else?

    Either the Propeller way works or it does not. Which is it?



  • jmg Posts: 15,173
     One can regain some determinism with interrupts. An interrupt handler triggered by a high priority interrupt will have a deterministic, known, latency to that event. But of course that just shifts the non-determinism to anything else going on in your program. It has to go somewhere. 

    Yup, that is exactly the point. The designer makes that call; do not remove that ability.
    Design is always a compromise; it is being able to control where that compromise is placed that matters.
  • potatohead Posts: 10,261
    edited 2015-07-13 20:22
    Actually, that ability isn't in spec, and for good reasons gone over many, many times.

    We should not add it.


  • Heater. Posts: 21,230
    edited 2015-07-13 20:23
    potatohead,

    ...essentially marginalizing the real differentiator.

    Did I say it before? I love it when you speak MBA.

    Can I translate?

    A Propeller is not, by design, a common or garden micro-controller.

    Back in the day I might say: "If you need a 555 timer chip to solve a problem don't use a micro-controller. If you have a problem that only a 555 chip can solve then use that".

    Today I say: "If you need an ARM or MIPS like System on a Chip (SoC) to solve your problem don't use a Propeller. If you have a problem that only a Propeller can solve then use that".

    Different machines for different purposes.


  • potatohead,

    ...essentially marginalizing the real differentiator.

    Did I say it before? I love it when you speak MBA.

    Can I translate?

    A Propeller is not, by design, a common or garden micro-controller.

    Back in the day I might say: "If you need a 555 timer chip to solve a problem don't use a micro-controller. If you have a problem that only a 555 chip can solve then use that".

    Today I say: "If you need an ARM or MIPS like System on a Chip (SoC) to solve your problem don't use a Propeller. If you have a problem that only a Propeller can solve then use that".

    Different machines for different purposes.




    I think one reason that this distinction between "embedded processor" and "general purpose processor" keeps getting blurred here when looking at P1 or P2 is that the Propeller's designer, Chip Gracey, intends that the P2 at least will support a self-hosted development environment. That indicates to me that the maker of P2 expects it to perform well as a general purpose processor and not be limited to "embedded" applications where determinism is required.
  • potatohead Posts: 10,261
    edited 2015-07-13 21:12
    Yep, and yes you have.  :P  Sucks, but when speaking to a product / market / user argument, framing it as I did makes great sense.  Takes a small book otherwise.

    >>Different machines for different purposes.

    If a bunch of "me too" features get tossed in there, we end up with something that isn't so much a Propeller as a whole lot like every other messy thing out there, and all that does is dilute the "worth it" use cases.  Instead, maximizing those makes the most sense.  That's where the money is.

    Think Apple.  Lots of people absolutely *hate* Apple for its hard stand on some specific things.  Same argument too:  It would be so much easier, if....  and what they don't see is the cost / benefit analysis performed by Apple and how that impacts the overall product value.

    For a lot of people, that won't matter much.  But for those to whom it does matter, it's worth paying for, but only when it's effective, clear, etc...  Diluting the idea of what a Propeller is would very likely reduce the value for those using it for what it is, while standing little chance of improving the appeal to those seeking a fundamentally different thing.

    Maybe that explains it better.

    No interrupts then. 

    That is how a Propeller works.  And each time we work through a case, the overall value of that improves for everyone, assuming it's shared.  Today, we are good at a lot of stuff without interrupts.  Arguably, the use value of the product is much higher now than it was at release time.

    Had the product been less well defined, a lot of that would not have happened, and instead we would very likely have seen every kludge, hack, etc... seeking to expand on whatever limited interrupt like thing it would have been, resulting in a far less potent and clear use value today.

    It's either worth it, or it's not.  I say it's worth it, and do not support diluting that one bit. 


  • Heater. Posts: 21,230
    David,
    Nah, I bet almost nobody here has read the few posts Chip has made about such things.
    I always took Chip's statements about that as an expression of shock and horror at the huge size and complexity of the typical computer and its software nowadays. I believe we all feel that from time to time.
    Self hosted development need not mean becoming like everyone else. 

  • potatohead Posts: 10,261
    edited 2015-07-13 23:27
    >>That indicates to me that the maker of P2 expects it to perform well as a general purpose processor and not be limited to "embedded" applications where determinism is required.

    Not at all.  Chip said no such thing.  And BTW, I've read every single one of them, and have had some nice long chats about this with Chip and others when I was able to.

    What he did say is he sees the messy and constantly moving development environment so often associated with embedded and microcontrollers as a very serious pain in the Smile.  Not worth it.  He also said he sees a distinct lack of attention being given to real time, clock edge response and interactivity.  Finally, dealing with signals, math, and natural phenomena is difficult, often expensive, and despite those things, in growing demand and of very high overall utility.

    The Propeller 2 is aimed at providing an alternative that maximizes the benefit of each of those things.

    1.  Being self hosting means a stable, finished, useful, practical, reference implementation.  Write code on it now, and write code on it 20 years from now, and it's going to work in the same simple, predictable way.

    This does not get in the way of gcc and friends.  But it absolutely does ensure a lean path for development throughout the life of the product.  See the Propeller tool, and its code base aging out, as a primary case in point.  Lots of work had to get done to move that off Delphi and x86 ASM, and while others may choose those things, you can be sure Chip won't want to, other than what is needed to make the product.  When it's time for Propeller code, he's going to use the environment he's set up over the years for that.  Once bootstrapped onto the P2, he can use that going forward.  Notice how the P2 tools were being done in the P1 toolchain context?  That's why.  Once he gets his set finished, it's finished.  Others can expand on it, etc... no worries.

    2.  The FPGA is awesome at real time, clock edge, etc... kinds of things, but those are obtuse.  General purpose processors have their problems too.  The idea of concurrent multi-processing, loosely referred to as "multi-core", has some specific benefits in the P1 that Chip wants to greatly expand in the P2.  This is why it has robust I/O, math, etc...

    3.  I'm sorry, but absolutely nothing touches how easy and potent SPIN + PASM is.  Other environments have their merits, but they do not have the lean, simple, robust, easy nature SPIN+PASM does.  If the P2 self-hosts, it can also very likely load tools for P1, and provide a consistent development ecosystem native to, and designed for and with the whole solution.  This is awesome.

    It's awesome, even if few people actually build code on the chip.  Why?  Because it will have all been designed to mesh together, and that's unique these days.  It can also make some pretty advanced things accessible to people who normally would never, ever climb the messy hill to get them done on other tool chains and workflows.

    For people interested in those benefits, seeing the self-hosting is much more than some nostalgia or other.  It's the reference, "how to do it the Propeller way" environment they know will be there for them as long as they have Propeller devices.

    For others, that's the point where other software gets made, and the usual update grind continues as normal.

    To each their own.  No harm being done here, just a big benefit for those interested in making use of it.

    4.  P2 is bigger in scale, and it's been made obvious that we need to be able to run bigger programs and get at more memory.  HUBEXEC and related changes will expand on and formalize what Bill started with LMM.  This means some general computing type tasks can be done, not that it's a design goal or other.  It may just be the bigger scale warrants bigger programs and data.

    The idea of a more general CPU has been tossed about for P3, but nothing more. 
  • potatohead Posts: 10,261
    edited 2015-07-13 21:15
    Regarding SPIN + PASM, and the potential self-hosting:

    Here's another way to think about it.  SPIN+PASM is fairly complete.  We know a couple of new things are needed to fully exploit a Propeller today, but if that's all one had to work with, a whole lot can be done.  And learning about SPIN+PASM is pretty lean.  Lots of complex things just aren't there, or where they are, happen to be packaged up and presented in simple, robust ways.

    The same will be true for P2. 

    Once a person has learned that stuff, they will be able to completely ignore the rest of it.  Operating system changes, language changes, changes, changes, changes...  Won't need 'em.  Won't need to care about them at all.

    What does that do?

    It frees that person to focus on whatever thing they want to get done, rather than get bogged down on a bunch of meta-tasks getting in the way.

    In the end, people want to get stuff done.  Once it's done, they want to get other stuff done.

    SPIN+PASM is aimed right at that use case.  Write it, run it, get it done, next.

    Lots of us want this, and we want it because the value of staying focused on getting stuff done is sufficient to be worth paying attention to.  That's why the P2 will have dev tools that work on a P2. 

    Lots of us don't want this, or just can't do it that way for any number of perfectly valid reasons.  And that is why the P2, like the P1, will have lots of other tools available that run on other things, but will produce code for a P2.

    We get the latter no matter what.  And we need it too, make no mistake. 

    The former is special, and I for one really want to see the vision in Chip's head play out, because having used the P1, I expect it to be lean, mean, productive, and a hell of a lot of fun.




  • Regarding SPIN + PASM, and the potential self-hosting:

    Here's another way to think about it.  SPIN+PASM is fairly complete.  We know a couple of new things are needed to fully exploit a Propeller today, but if that's all one had to work with, a whole lot can be done.  And learning about SPIN+PASM is pretty lean.  Lots of complex things just aren't there, or where they are, happen to be packaged up and presented in simple, robust ways.

    The same will be true for P2. 

    Once a person has learned that stuff, they will be able to completely ignore the rest of it.  Operating system changes, language changes, changes, changes, changes...  Won't need 'em.  Won't need to care about them at all.

    What does that do?

    It frees that person to focus on whatever thing they want to get done, rather than get bogged down on a bunch of meta-tasks getting in the way.

    In the end, people want to get stuff done.  Once it's done, they want to get other stuff done.

    SPIN+PASM is aimed right at that use case.  Write it, run it, get it done, next.

    Lots of us want this, and we want it because the value of staying focused on getting stuff done is sufficient to be worth paying attention to.  That's why the P2 will have dev tools that work on a P2. 

    Lots of us don't want this.  And that is why the P2, like the P1, will have lots of other tools available.




    This sounds great except that apparently a large part of Parallax's target market didn't want Spin+PASM. What do you suggest be done about that? Better marketing that emphasizes why Spin+PASM is the right solution?
  • jmg Posts: 15,173


    I think one reason that this distinction between "embedded processor" and "general purpose processor" keeps getting blurred here when looking at P1 or P2 is that the Propeller's designer, Chip Gracey, intends that the P2 at least will support a self-hosted development environment. That indicates to me that the maker of P2 expects it to perform well as a general purpose processor and not be limited to "embedded" applications where determinism is required.

    Yes, an interesting 'aspiration', and it may get close(r) with a P2.

    I saw these numbers on a Raspberry Pi, which is the closest thing now to true small self-hosted development:

    ["It takes a little time to build on a plain old Raspberry Pi: around 55 minutes. The Raspberry Pi 2 with `make -j4` builds the compiler in 6½ minutes."]

    Quite a jump in speed, and it illustrates just how much RAM and MHz are needed.
    ( Those first-gen build times of 55 minutes place it in the curiosity basket. )

    Luckily, those can come for a modest price.



    I think one reason that this distinction between "embedded processor" and "general purpose processor" keeps getting blurred here when looking at P1 or P2 is that the Propeller's designer, Chip Gracey, intends that the P2 at least will support a self-hosted development environment. That indicates to me that the maker of P2 expects it to perform well as a general purpose processor and not be limited to "embedded" applications where determinism is required.

    Yes, an interesting 'aspiration', and it may get close(r) with a P2.

    I saw these numbers on a Raspberry Pi, which is the closest thing now to true small self-hosted development:

    ["It takes a little time to build on a plain old Raspberry Pi: around 55 minutes. The Raspberry Pi 2 with `make -j4` builds the compiler in 6½ minutes."]

    Quite a jump in speed, and it illustrates just how much RAM and MHz are needed.
    ( Those first-gen build times of 55 minutes place it in the curiosity basket. )

    Luckily, those can come for a modest price.



    I suspect whatever Chip puts together as a self-hosted development environment for the P2 will build a lot faster than a GCC-based toolchain. :-)
  • potatohead Posts: 10,261
    edited 2015-07-13 21:25
    >>What do you suggest be done about that? Better marketing that emphasizes why Spin+PASM is the right solution?

    There is Chip and the P2 and the tools Chip makes and uses.

    There is Parallax and the P2 and the tools that Parallax, along with others, makes and uses.

    Two different things.

    Parallax is going to take the P2, and sell it into ALL of their markets.  The "I want what Chip made" market will be using SPIN + PASM, and will be doing so precisely for the reasons I gave above.

    The education / industry markets, or, put better, the "I want the chip Chip made" markets, will be using C, maybe even Python or something along those lines depending, and those tools will all end up as they are today.

    It is absolutely not necessary to judge either market.  In fact, doing that is bad for business.

    Instead, serve each market.  That's what is most likely to happen, and if it were me, precisely what would happen.

    The minute we get working P2 images, and we get some commits, the work on C, and friends will begin just as it did last time.  It may well be C is up, running, and there before the self-hosted SPIN+PASM is, just like it was looking last time we did this.

    And that is AWESOME!  Had that one been a go, it would have rocked hard.  Released outta the gate with "pro" or "industry" tools, and Parallax education would have had a field day, as would most of us wanting to give it all a go.

    I can tell you, there are times and plans I have that will use C.  I was glad we had C moving on the last image too.  Cool beans.

    I can also tell you there are times and plans I have that will center on SPIN+PASM and the self-host is an important part of that.

    A whole lot of it depends on the context.  If it's for me personally?  I'm gonna be using SPIN+PASM, and the same is likely true for hobby / entertainment and general purpose learning / exploring type use cases.

    If it's professional?  It's highly likely to be C.

    The thing to realize is SPIN+PASM are going to happen anyway.  That's part of Chip's design vision.  No matter what, that happens.  The rest is driven by demand.

    Parallax need only meet that demand to make money.
  • Heater. Posts: 21,230
    jmg,
    What was it that was taking 55 minutes on a Pi?
    Certainly building prop-gcc on a Pi took me a long time. Building Simple IDE also. Building the Qt libs that Simple IDE uses took over a day!
    I find it all mind boggling sometimes. One can build a C compiler on an old z80 CP/M system in a lot less time than that.  Heck, I have done it on a z80 emulator running on the Propeller!
     
  • Heater. Posts: 21,230
    potatohead,

    ...when speaking to a product / market / user argument, framing it as I did makes great sense.

    I kind of get what you are saying there. But I have a problem with it.

    A statement like "essentially marginalizing the real differentiator" can be made of any product anywhere, any time, from lollipops to airliners.

    It is, essentially, meaningless.

  • Here's a great example of these ideas in action:

    Two machines:  The BBC Micro https://en.wikipedia.org/wiki/BBC_Micro  and the Apple 2 https://en.wikipedia.org/wiki/Apple_II

    Both took the "you have what you need in the box" approach to things.  The way they did it differs a little, with the BBC Micro extending BBC Basic to include assembly as part of the language, and the Apple 2 including a mini-assembler, monitor and Applesoft Basic that could call assembly and machine language programs.

    Each of these machines sprouted great little ecosystems.  Lots of software, tools, applications, etc... happened on both. 

    Notably, people who didn't know much at all, jumped on both machines and produced amazing software and projects. 

    Here's one:  https://en.wikipedia.org/wiki/Elite_(video_game)

    On the BBC Micro, Elite packed more into 32K than anyone ever expected.  Was it industry best practice?  Heck no!  Go and read the source code sometime.  It's as amazing as the game is. 

    On the Apple 2, the Merlin assembler, which ran very nicely on an Apple 2, was used to develop for all sorts of things.  Look up Beagle Brothers for a nice set of tools that made great use of the "in the box" development capabilities, among many others.

    Both machines were open, easily understood, easily modified, etc...

    While a lot of use cases were education, people ended up doing automation, lab test, data collection and measure, audio, and a ton of other things too.

    Both machines ended up getting more industry standard tools as well, and saw business use, general computing, etc...

    The P2 could very well end up working like a BBC Micro and Apple 2 did, complete enough to build pretty awesome stuff on without having to know a lot to do it. 

    There are efforts to bring that back to current generations too.  The Pi is a great little machine, and the Beeb just did another "micro" in the form of the Micro:BIT http://hackaday.com/2015/07/07/the-bbc-microbit/

    What Chip does and how he does it centers right in this sweet spot, and it's not really the same as various industry sweet spots, nor does it have to be to be worth doing.


  • potatohead Posts: 10,261
    edited 2015-07-13 21:46
    I have a problem with it too.  Just so you know.  See below.

    >>It is, essentially, meaningless.

    Outside of a context, completely agreed.  But we have a context here, and one was supplied in the post.  Nobody likes reading that much, but it's accurate, conveys the necessary ideas, and if there is to be some debate on the matter, framed up so I could employ a TON of resources to back my case. 

    That said, I sure don't mind translation.  Not a worry for me personally.  And as you know, I'll write short books when I feel like it too.

    I've added a couple of posts to augment the core points I'm trying to make.  What's going to happen is people will center in on an example, or sideline the dialog with minor points here and there.  The advantage of that language is all those little nubs get rubbed off, and one is left with the idea.  Sometimes that's better / easier / faster.  Sometimes it's not.

    :)


  • Heater. Posts: 21,230
    potatohead,
    Please do continue with the MBA speak and short books. :)
  • jmg Posts: 15,173


    I suspect whatever Chip puts together as a self-hosted development environment for the P2 will build a lot faster than a GCC-based toolchain. :-)

    Of course :), but users will still want to build GCC-like toolchains.

    A nice target for P2 would be something like Borland's Turbo Pascal - ~39 KB as compiler and IDE?
    Not sure how large the source was (or if it built itself), but it cannot have been huge.

    I see Googling "turbo pascal size" gives some amusing results.
  • I suspect whatever Chip puts together as a self-hosted development environment for the P2 will build a lot faster than a GCC-based toolchain. :-)

    Of course :), but users will still want to build GCC-like toolchains.

    A nice target for P2 would be something like Borland's Turbo Pascal - ~39 KB as compiler and IDE?
    Not sure how large the source was (or if it built itself), but it cannot have been huge.

    I see Googling "turbo pascal size" gives some amusing results.


    I doubt that a GCC-based toolchain will ever compile on the P2 target platform though. :-)