GCC / Eclipse and Propeller 2 - seeking developers - Page 13 — Parallax Forums

GCC / Eclipse and Propeller 2 - seeking developers


Comments

  • potatohead Posts: 10,261
    edited 2011-05-28 19:42
    Well, I think combining the efforts is still the way to go. I don't disagree with the "pro" gcc points you've made, Steve. What I mean by that is essentially: take the lessons learned on Catalina, and apply them to gcc, keeping a very open mind on what the process might look like, i.e. source binding as opposed to binary linking, as one example of where a significant advantage might lie.

    Perhaps that should be the initial, foundation discussion beyond this one?

    Assume gcc will be used, defer the fork-or-not question, and also assume that Catalina has proven some possible and practical means and methods, and factor all of that down. Actually, that discussion will probably tell us whether or not a fork makes any real sense. I'm quite sure Atmel went down the same path.

    So, why not do that right now? Let's limit the scope to foundation things, until we've defined:

    1. necessary mods to gcc
    2. minimum required CPU definitions to support C
    3. build / link / bind process
    4. binary format
    5. input and output required for GUI operation later

    Those should be done in tandem with what we expect the VM to look like.

    It really should look something like that at this point. I submit there is more than enough experience in this group to talk that through, and vet it at a higher level.

    Then:

    Spec it, divvy up the work, and it starts!

    In parallel with that discussion, an honest competitive analysis, based on what we know right now, so that the spec can be linked to those things to ensure we are actually adding value, and not creating our own reality.

    That's my core recommendation. I know all the pieces are not there, but I do know we can't realistically proceed without establishing some things at that level and hope to have any chance at a realistic outcome. I am making that recommendation because I strongly agree with your rather binary assessment of the need, jazzed. Connecting our specification, or at least a rough project plan, to the strengths of the chip, as we currently understand them, is core to the success you just articulated.

    Re: Catalina for Prop II. Absolutely! I would expect nothing less, and frankly, that is a very good thing. It will keep the project relevant, as Catalina is, for better or worse, the working benchmark right now, simply because it does work. I bet it can be done better too.

    Let's pick both of these apart and see what we really know. I want success too. I want it because I really love the technology, and I like this community, and the people in it, and Parallax as a company does a whole lot of basic things right, when they clearly could choose not to do so. Rewarding that is a must.

    FYI: The questions I held back are those that involve aligning the product of this effort to the strengths of the chip, and of course, what those strengths are. The former isn't too tough to talk through. The latter is going to be rather messy, and will need moderation, IMHO.
  • RossH Posts: 5,519
    edited 2011-05-28 19:46
    jazzed wrote: »
    I am absolutely sure for various reasons that Ross will produce a Propeller 2 Catalina regardless of whether a GNU/GCC port includes Propeller 2.

    Yes, I have said so in this very thread. At the moment, it appears this will be a fairly straightforward task (but of course I can't be absolutely sure until the Prop II instruction set is properly documented).

    However, that doesn't mean I wouldn't also be happy to contribute to any public GCC effort if one eventuates.

    Ross.
  • jazzed Posts: 11,803
    edited 2011-05-28 19:50
    Beautiful. Just Beautiful. Thanks.
  • David Betz Posts: 14,516
    edited 2011-05-28 19:53
    potatohead wrote: »
    take the lessons learned on Catalina, and apply them to gcc, keeping a very open mind on what the process might look like, i.e. source binding as opposed to binary linking, as one example of where a significant advantage might lie.

    While I agree that source level binding was a very clever solution to the problem of not having a more traditional assembler and linker available, I don't see any advantages to it over the more traditional approach. What advantages does source level binding have over traditional linking? I haven't heard any compelling arguments in its favor other than that it already exists in Catalina.
  • potatohead Posts: 10,261
    edited 2011-05-28 19:57
    Well, will we have a more traditional assembler and linker? Will we have one prior to the release of Prop II?

    What does a more traditional assembler look like, given how the Prop is designed? There is PASM, and there is LMM. Assembler for both? How does that connect to the VM?
  • David Betz Posts: 14,516
    edited 2011-05-28 20:01
    potatohead wrote: »
    Well, will we have a more traditional assembler and linker? Will we have one prior to the release of Prop II?
    I think that could be done. I don't think binutils is as hard to port as gcc, and it can probably be done fairly quickly. That is just a gut feeling though. I haven't done it myself. We should verify that with someone who has experience doing a port.
    What does a more traditional assembler look like, given how the Prop is designed?
    I don't know that the Propeller assembler syntax is particularly a problem. It just needs to generate relocatable output so that object files can be combined into libraries and also a lot of additional debug information to drive tools like gdb. This is handled by the GNU assembler, gas, which is part of binutils along with the GNU linker, gld.
  • potatohead Posts: 10,261
    edited 2011-05-28 20:04
    Relocatable...

    In the PASM model, how is that done? I don't see branch relative among the instructions. Secondly, in the LMM model, relocatable code IS possible, IF the VM is written to provide that functionality.

    COG as micro-code as well as COG as primary executable both need consideration, do they not?
  • RossH Posts: 5,519
    edited 2011-05-28 20:43
    David Betz wrote: »
    While I agree that source level binding was a very clever solution to the problem of not having a more traditional assembler and linker available, I don't see any advantages to it over the more traditional approach. What advantages does source level binding have over traditional linking? I haven't heard any compelling arguments in its favor other than that it already exists in Catalina.

    Hi David,

    I'm not actually advocating this technique - I used it because there were no other options available on the Prop I. What would have been the point of defining (or adopting) a complex object format when neither SPIN nor PASM uses it? But this is really another issue yet to be discussed in this thread.

    However, since I adopted it, I have discovered several advantages ....

    1. It virtually eliminates the need for the multiplicity of complex tools you need with the more traditional approach (e.g. almost the whole of the GCC 'binutils' exists for no other reason than performing various arcane manipulations of binary formats and libraries). These things are just a chore if you are only ever building binary images for use in embedded environments. As you (or perhaps it was jazzed) pointed out earlier - not only will you need 'binutils', you then need another set of utilities to build the final binary images. And does all this complexity help you get the programs loaded? No - you still have to do all that work as well!

    2. (related to point 1, but not the same) It allows you to leverage existing 'specialist' tools. Why should the GCC team have to build binary images at all, when Parallax (or in my case Michael Park!) put a lot of effort into building a tool to do this very job? Why reinvent the wheel when you don't have to? Why should they even have to build assemblers? Has anyone yet pointed out that the GAS assembler will be incompatible with the PASM we are all used to? The instructions may use the same mnemonics, but the syntax will probably differ - and I doubt whether any of the directives will be the same.

    3. It makes quite sophisticated code optimization techniques almost completely trivial to implement. This is because the source code (PASM) contains information about the program that is lost (or much harder to use) as soon as it is converted to binary form.
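
    To make point 3 concrete, here is a toy sketch in C (not Catalina's actual code - the pass, the rule and all of the names are invented for illustration) of the kind of peephole pass that becomes trivial when the "object" being bound is still PASM text: the rule is just string matching on source lines, whereas a binary-level pass would first have to decode operand fields.

        /* Toy peephole pass over PASM source text - illustration only.
         * Drops the pointless instruction "mov X, X" (a register moved
         * onto itself).  A real pass would also have to respect
         * condition prefixes and effect flags such as wz/wc.           */
        #include <stdio.h>
        #include <string.h>

        /* Return nonzero if the line is "mov X, X" with identical operands. */
        static int is_redundant_mov(const char *line)
        {
            char op[16], dst[32], src[32];
            if (sscanf(line, " %15s %31[^, ] , %31s", op, dst, src) != 3)
                return 0;
            return strcmp(op, "mov") == 0 && strcmp(dst, src) == 0;
        }

        int main(void)
        {
            char line[256];
            while (fgets(line, sizeof line, stdin)) {
                if (is_redundant_mov(line))
                    continue;                /* delete the redundant move    */
                fputs(line, stdout);         /* copy everything else through */
            }
            return 0;
        }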

    Ross.
  • David Betz Posts: 14,516
    edited 2011-05-28 20:46
    potatohead wrote: »
    Relocatable...

    In the PASM model, how is that done? I don't see branch relative among the instructions. Secondly, in the LMM model, relocatable code IS possible, IF the VM is written to provide that functionality.

    COG as micro-code as well as COG as primary executable both need consideration, do they not?

    Relocatable code does not necessarily imply relative addressing. Relocatable just means that some addresses aren't fixed until link time. Actually, relative addressing often means relocation isn't necessary since code can be positioned anywhere in memory because branches are relative to the current PC. The GNU linker can certainly handle relocating references to absolute addresses whose values are not determined until link time. In fact, that is its primary function.
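
    To put that in concrete terms, here is a minimal illustration in plain C (nothing Propeller-specific, and the file and symbol names are made up):

        /* lib.c - compiled separately to lib.o */
        int counter = 0;

        void bump(void) { counter++; }

        /* main.c - compiled separately to main.o.  The compiler emits
         * references to counter and bump without knowing their final
         * addresses; the object file carries relocation records that the
         * linker patches once it has decided where lib.o and main.o live. */
        extern int counter;
        extern void bump(void);

        int main(void)
        {
            bump();
            return counter;   /* address fixed at link time, not compile time */
        }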

    I'm not sure that the linker needs to be concerned about "COG as microcode" since, in that case, the COG microcode is used to implement a virtual machine that is the target of GCC rather than the native COG instruction set itself. In the case of LMM, there is a mix between actual COG instructions and "VM" instructions that are part of the LMM kernel. Does the Catalina binder handle this distinction? By the way, I'm not trying to argue that the Catalina binder has nothing to offer. I just haven't heard about anything that it does that can't also be done by a more traditional linker in the binary domain.
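
    For reference, this is roughly the mental model I have of an LMM kernel, written as a C sketch purely for illustration - a real kernel is a handful of PASM instructions running in a COG, and every name below is invented:

        #include <stdint.h>

        #define LMM_PRIMITIVE_MASK 0xF0000000u  /* invented encoding, sketch only */

        extern uint32_t hub_ram[];              /* hub RAM holding the large program */
        extern void     cog_execute(uint32_t insn);                /* run one native
                                                                      COG instruction */
        extern uint32_t do_primitive(uint32_t insn, uint32_t pc);  /* long jumps,
                                                                      calls, etc.     */

        void lmm_kernel(uint32_t pc)
        {
            for (;;) {
                uint32_t insn = hub_ram[pc++];              /* FETCH from hub         */
                if ((insn & LMM_PRIMITIVE_MASK) == LMM_PRIMITIVE_MASK)
                    pc = do_primitive(insn, pc);            /* "VM" instruction:
                                                               interpreted by kernel  */
                else
                    cog_execute(insn);                      /* real COG instruction:
                                                               executed natively      */
            }
        }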
  • RossH Posts: 5,519
    edited 2011-05-28 20:52
    potatohead wrote: »
    Relocatable...

    In the PASM model, how is that done? I don't see branch relative among the instructions. Secondly, in the LMM model, relocatable code IS possible, IF the VM is written to provide that functionality.

    In the PASM model? Not sure what you mean by that.

    In Catalina relocation never needs to be done because everything is kept in source format until the final assembly. In the more traditional model, all intermediate objects (especially library objects) need to carry around relocation information which tells you where you will need to fix up references before you can run it. This can be done at load time, but in the case of the propeller the loaders will probably not be sophisticated enough, so the last step in the compile process will be to generate an absolute image of some kind.

    Ross.
  • potatohead Posts: 10,261
    edited 2011-05-28 20:55
    Ross, intriguing! (referring to your post before this last one) And given the resources at hand, compelling, IMHO.

    "in the PASM model" really refers to just assembled PASM, that's all. Well, that and it being in the COG. As I note below, I was mixing a coupla things together there.

    David, of course that makes perfect sense. I was kind of fixated on "traditional assembler", and wondering about the differences between a prop and other CPUs. Mixed some things together there.

    Shouldn't we then have a higher level discussion on the merits of those tools? Is the goal to duplicate the traditional workflow, or facilitate the writing of C programs?

    And, how does that difference pay off for attracting prospective users of the technology? Are they interested in doing the same WORK, or writing the same code, or both, and why?

    I'm sensing the need to have it work the way it always works elsewhere, because???

    Have we made the case that is necessary? If we had a tool chain that took C code, and output easy to use binaries, is that enough?

    Edit: "It's gcc, but easier" is the kind of thing I'm getting at. Is it possible to leverage the familiarity of gcc without having to reproduce the workflow exactly? Does it make sense to do that?
  • RossH Posts: 5,519
    edited 2011-05-28 20:55
    David Betz wrote: »
    In the case of LMM, there is a mix between actual COG instructions and "VM" instructions that are part of the LMM kernel. By the way, I'm not trying to argue that the Catalina binder has nothing to offer. I just haven't heard about anything that it does that can't also be done by a more traditional linker in the binary domain.

    Understood, David!

    Not sure about other kernels, but in Catalina there is nothing that needs special treatment by the binder or the assembler. Catalina's "primitives" are just normal PASM instructions.

    Ross.
  • David Betz Posts: 14,516
    edited 2011-05-28 21:06
    potatohead wrote: »
    Shouldn't we then have a higher level discussion on the merits of those tools? Is the goal to duplicate the traditional workflow, or facilitate the writing of C programs?

    There is more involved with developing programs in C than just compiling code that will run on the target processor. As I'm sure you know, code often has bugs that need to be fixed. The GNU toolchain includes the gdb debugger, which makes heavy use of debugging information passed through the compile/assemble/link process. Of course, Catalina also provides debugging facilities, and I'm not sure how they compare with gdb, but that comparison should be made to determine which provides the more effective debugging tool. If nothing else, gdb supports debugging C++ programs, which require a tremendous amount of additional debugging information that is probably not handled by the Catalina debugger. This makes perfect sense because Catalina doesn't support C++, but being able to compile and debug C++ may be important to Parallax customers. Even if C++ is not needed, it could be that gdb can provide superior debugging. As I said, that comparison would have to be made by someone who knows both.

    Ross: does the Catalina debugger support source level C debugging with access to local variables, function arguments, struct and union inspection using the source code field names, variable types, etc?
  • potatohead Posts: 10,261
    edited 2011-05-28 21:13
    @David: Thanks. That is exactly the kind of thing I was getting at.

    There needs to be a discussion as to the relative value of things, not just based on impressions. It occurs to me that Parallax should entertain some information gathering. Either just do it, or contract for it. What is C++ support worth, as opposed to just C?

    Honestly, has any material study been done to establish core requirements, beyond "the other guys have it"?

    There likely isn't time or resources to just do everything, and do it right, and do it well, so what's the priority and why?
  • RossH Posts: 5,519
    edited 2011-05-28 21:13
    potatohead wrote: »
    Shouldn't we then have a higher level discussion on the merits of those tools? Is the goal to duplicate the traditional workflow, or facilitate the writing of C programs?

    And, how does that difference pay off for attracting prospective users of the technology?

    I'm sensing the need to have it work the way it always works elsewhere, because???

    Have we made the case that is necessary? If we had a tool chain that took C code, and output easy to use binaries, is that enough?

    Yes, I wish this discussion had taken place. We are suffering from its omission.

    From a pure software engineer's perspective ... and ignoring the C++/Objective-C issue for the moment (which I acknowledge is a legitimate want) ... if I can push ANSI C in one end, and get Propeller binaries out the other end, do I care what the intervening toolset is? No, of course not! Why would I?

    Without intending to get anyone "het up" about it, there seems to be a certain amount of simple "me too"-ism about adopting GCC. We must use it because everyone else uses it. But in fact, everyone else does not use it. Many companies (my own included) do not use GCC at all, even though it supports the processors we use (and it's free!). We (and many others) choose to pay to use a commercial compiler, like ICC or IAR. Why? Because we are willing to pay a relatively small amount for better tools, better support and better results.

    I understand that Parallax Semiconductor may be a different beast, but the original Parallax was not traditionally a "me too" kind of company!
  • RossH Posts: 5,519
    edited 2011-05-28 21:14
    David Betz wrote: »
    Ross: does the Catalina debugger support source level C debugging with access to local variables, function arguments, struct and union inspection using the source code field names, variable types, etc?

    Of course.

    Ross.
  • potatohead Posts: 10,261
    edited 2011-05-28 21:17
    It is not too late to have these discussions. They should happen. I can't see a useful work product happening without them. These are the same discussions we use when bringing new products to market, and we consult with vendors on the same. A solid vetting is needed here, or it's all just self-feeding, which is very likely not to be productive.

    -->Free tools are good, but I totally identify with paying for better tools and support. Time-to-market arguments alone make that equation favorable more times than a lot of people would want to admit. Fair check on the "everybody uses them" argument, IMHO.

    Given the nebulous scope we've got right now, what would dev time be? A few man-months? Maybe that's fair. You guys tell me.

    Let's say it's 6 man-months. That's roughly 1,000 man-hours (6 months at around 170 working hours a month). At any kind of reasonable contracting rate, that's not small change, folks.

    Another recommendation, given that, is that some real money be spent vetting basic requirements, by identifying the potential new customer adoption opportunity, and what core differentiators need to be there in order to qualify for those opportunities.

    That exercise will answer some very basic things in play here, resolving some of the project definition problems rather easily. There is what we would prefer to see, there is what is possible, and there is what will bring in the dollars. In the end, this is all about the dollars, is it not?
  • RossH Posts: 5,519
    edited 2011-05-28 21:21
    potatohead wrote: »
    It is not too late to have these discussions. They should happen.

    Yes, I agree it's not too late - it's just a bit confusing to have the two types of discussion (i.e. technical and marketing) "interleaved" in the one thread.

    Ross.
  • David Betz Posts: 14,516
    edited 2011-05-28 21:25
    RossH wrote: »
    From a pure software engineer's perspective ... and ignoring the C++/Objective-C issue for the moment (which I acknowledge is a legitimate want) ... if I can push ANSI C in one end, and get Propeller binaries out the other end, do I care what the intervening toolset is? No, of course not! Why would I?

    Well, as I said earlier, there is the issue of which debugging tools are more capable. I don't know the answer to that, but it sounds like Catalina's might be more sophisticated than I had thought. I don't tend to use debuggers very often myself, so I haven't had a chance to try Catalina's, although I have used gdb some.

    There is also the issue of which compiler, gcc or lcc, has the potential to generate better code. I know that Ross had to write an optimizer for Catalina so that suggests to me that LCC itself may not be that good at optimization. I think gcc can be pretty good at optimization but it will be hard to measure which is actually better for generating good Propeller code without having a gcc port to compare with Catalina. I'm not sure how to approach that problem since it requires that we build gcc just to determine if it is a good idea. :-)

    There is also the matter of the representation of code libraries. I have certainly worked with customers who would only deliver their libraries in object form as an attempt to protect their source code. Since Catalina binds at source level all library code must be furnished in source form. This may be an issue for some commercial developers.

    Also, when we start writing really big Propeller programs that run out of giant external memories like jazzed's SDRAM module we may find that having to assemble everything from source on every build is inefficient. However, most development these days is done on very fast machines with huge hard drives so that may not be a concern.

    Those are just a few issues. I'm sure there are many more.
  • potatohead Posts: 10,261
    edited 2011-05-28 21:36
    @Ross: Agreed, which is why I held a lot of them back. The ones posed impact requirements though. Really, I don't see how to get away from them.

    So then the big question:

    Who owns this thing? It's gotta be asked and answered, or there will be tail-chasing and such, until it becomes moot.

    Edit: That's probably my last contribution for a while.

    Except for:

    "Parallax has not traditionally been a me-too company" That rings true, and is something that should not be dismissed.

    And, this is an engineering-based community, and Parallax is an engineering-focused company. I get that. But I cannot underscore the value of basic marketing-type data enough, so that there can be some weighting of things where pure reason and technical merit cannot otherwise resolve them.
  • RossH Posts: 5,519
    edited 2011-05-28 21:42
    David Betz wrote: »
    Well, as I said earlier, there is the issue of which debugging tools are more capable. I don't know the answer to that, but it sounds like Catalina's might be more sophisticated than I had thought. I don't tend to use debuggers very often myself, so I haven't had a chance to try Catalina's, although I have used gdb some.
    You will find Catalina's quite similar to GDB's - although not quite as 'polished' or sophisticated.
    David Betz wrote: »
    There is also the issue of which compiler, gcc or lcc, has the potential to generate better code. I know that Ross had to write an optimizer for Catalina so that suggests to me that LCC itself may not be that good at optimization. I think gcc can be pretty good at optimization but it will be hard to measure which is actually better for generating good Propeller code without having a gcc port to compare with Catalina. I'm not sure how to approach that problem since it requires that we build gcc just to determine if it is a good idea. :-)
    Actually, it could just mean I'm lousy at writing code generators. :smile:

    But in fact, if code optimization is the criterion, I'd bet that neither GCC nor LCC would be the winner. I believe GCC is good, but commercial compilers are generally better still.
    David Betz wrote: »
    There is also the matter of the representation of code libraries. I have certainly worked with customers who would only deliver their libraries in object form as an attempt to protect their source code. Since Catalina binds at source level all library code must be furnished in source form. This may be an issue for some commercial developers.
    When you say "source form", you realize we are talking about compiler generated PASM here? It's not exactly C source code - and in any case, any decent disassembler would generate almost exactly the same output from any object form you used.
    David Betz wrote: »
    Also, when we start writing really big Propeller programs that run out of giant external memories like jazzed's SDRAM module we may find that having to assemble everything from source on every build is inefficient. However, most development these days is done on very fast machines with huge hard drives so that may not be a concern.
    Possibly true - although I doubt you could ever compile a program large enough to make more than a few seconds of difference. But let's be realistic - how likely do you really think it is that customers are going to hook 32Mb external SDRAM chips to a propeller? We here in the forums do crazy things like this - but if I had an application that needed that much RAM, I wouldn't be using a Propeller in the first place.
    David Betz wrote: »
    Those are just a few issues. I'm sure there are many more.

    Yes, there are lots more! But it's useful to tease them all out.
  • RossH Posts: 5,519
    edited 2011-05-28 21:49
    potatohead wrote: »
    So then the big question:

    Who owns this thing? It's gotta be asked and answered, or there will be tail-chasing and such, until it becomes moot.
    Good question. At present we have too many big egos and too little real direction (and before anyone gets mad - I'm including myself in that!)
    potatohead wrote: »
    Edit: That's probably my last contribution for a while.
    I think your contributions have done more to progress the debate than anyone else's (mine included!). Thanks!

    Ross.
  • David Betz Posts: 14,516
    edited 2011-05-28 21:58
    RossH wrote: »
    But in fact, if code optimization is the criterion, I'd bet that neither GCC nor LCC would be the winner. I believe GCC is good, but commercial compilers are generally better still.
    That's probably true. I suspect the cost and development time for a custom compiler would be quite large but I guess someone would have to look into it to find out for sure.
    When you say "source form", you realize we are talking about compiler generated PASM here? It's not exactly C source code - and in any case, any decent disassembler would generate almost exactly the same output from any object form you used.
    Excellent point! I take it you don't insert the original C source code into the assembly output as comments.
    Possibly true - although I doubt you could ever compile a program large enough to make more than a few seconds of difference. But let's be realistic - how likely do you really think it is that customers are going to hook 32Mb external SDRAM chips to a propeller? We here in the forums do crazy things like this - but if I had an application that needed that much RAM, I wouldn't be using a Propeller in the first place.
    And I just bought stock in jazzed's SDRAM board business! You mean I'm not going to get rich after all??? :-)
  • RossH Posts: 5,519
    edited 2011-05-28 22:07
    David Betz wrote: »

    Excellent point! I take it you don't insert the original C source code into the assembly output as comments.
    No.
    David Betz wrote: »
    And I just bought stock in jazzed's SDRAM board business! You mean I'm not going to get rich after all??? :-)
    Aha! - suddenly the push for C++ makes sense!

    What you can fit in 32Kb of C code on the Prop I will take at least 32Mb if written in C++ for the Prop II!

    He's clever, that jazzed!

    Ross.
  • jazzed Posts: 11,803
    edited 2011-05-28 22:22
    General functional principles are: 1. Why, 2. What, 3. Who, and 4. How.

    Q. Who owns this thing?
    A. Parallax does. Fact!

    Q. Where are we with requirements?
    A. Still in the "Why" phase. The WHY phase is multi-faceted.
    It's not just why do we need to support yada-yada to make more money.

    Q. When will we have a requirements document?
    A. Requirements document is a WHAT "work product" that is produced by/for and reviewed/approved by the customer. Part of the WHAT in this case depends on some studies by key contributors. Other parts of WHAT come from Parallax staff. In this case, the user community has lots of input.

    Q. When will the "work products" officially start?
    A. When Parallax says go!

    Now, there are a lot of WHO questions. Without WHO, nothing can get done.
    Team member and member role selection is ongoing, AFAIK.

    Ken (Parallax) will talk more on the WHY, WHAT, and WHO of all this later.
    Consultants/Contractors will focus on the HOW of assigned work products when it's time.

    Now, go have fun, get some rest, or do whatever else makes you happy.
    Cheers.
  • jazzed Posts: 11,803
    edited 2011-05-28 22:27
    RossH wrote: »
    He's clever, that jazzed!
    Don't give me too much credit! :)
  • Heater. Posts: 21,230
    edited 2011-05-28 22:44
    What is all this talk about "forking" gcc or not?

    Initially you don't get the choice. GCC has a number of targets and you want to add another one. OK copy the code, hack it around and there is your new target architecture.
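
    To make "copy the code and hack it around" a bit more concrete: a new back end mostly lives under gcc/config/<cpu>/ - a machine description, a header full of target macros and a C file of target hooks - plus an entry in gcc/config.gcc. Very roughly, and with "propeller" as a placeholder name since no such port exists:

        /* gcc/config/propeller/propeller.c - target hooks for a hypothetical
         * Propeller back end (GCC 4.x-era layout).  The companion files are
         * propeller.h (target macros: registers, ABI, memory layout) and
         * propeller.md (the machine description with the insn patterns).   */
        #include "config.h"
        #include "system.h"
        #include "coretypes.h"
        #include "target.h"
        #include "target-def.h"

        /* ... hook overrides and helper code go here ... */

        /* Every back end ends by instantiating the target hook vector. */
        struct gcc_target targetm = TARGET_INITIALIZER;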

    Now, are your patches ever going to be accepted by the GCC devs into the mainline GCC? Who knows? If your target is popular and in widespread use, then perhaps your branch gets merged into the mainline. If it's a niche thing, then perhaps it never does.

    A good example here is GCC for the ZPU. It's a fork. Perhaps ZPU support will never make its way into the mainline GCC. This has the unfortunate downside that GCC moves on and the forked ZPU version does not. As a result, we find we cannot build zpu-gcc on Ubuntu, for example. Just recently Andey Demenev has created another zpu-gcc fork that supports little-endian ZPU. Is that ever even going to be merged back into zpu-gcc, let alone GCC itself?

    So, fork or not, it's down to Parallax or the user community to keep prop-gcc up to date and in sync with the mainline GCC.
  • RossH Posts: 5,519
    edited 2011-05-28 22:58
    Heater. wrote: »
    What is all this talk about "forking" gcc or not?

    Yes, I wondered about that also. I didn't want to stick my oar in .. but since you have :smile: ...

    I also wonder how Parallax is going to maintain their GCC port in the medium to long term. Hiring a team to do the initial port is one thing - but it may not have occurred to them that they will have to make a substantial and long-term commitment in both time and resources to support it and keep it up to date (and on multiple platforms, too!). Especially since the level of support expected by Parallax Semiconductor customers is likely to be a lot higher than the level of support expected by Parallax customers.

    Perhaps they think the wider GCC community will support their port for them? Or perhaps they intend to rely on the people in these forums? Either one seems unlikely.

    Another issue to throw into the mix.

    Ross.
  • RossH Posts: 5,519
    edited 2011-05-28 23:01
    jazzed wrote: »
    Now, go have fun, get some rest, or do whatever else makes you happy.

    But jazzed - arguing with each other is what makes us happy!

    Ross.
  • Sapieha Posts: 2,964
    edited 2011-05-29 00:16
    Hi potatohead.

    The only difference between a traditional binary linker and Ross's source-level linker is that with a traditional binary linker you can build binary libraries that you can sell without needing to show the code.
    In Ross's version, you always need the source.


    potatohead wrote: »
    Ross, intriguing! (referring to your post before this last one) And given the resources at hand, compelling, IMHO.

    "in the PASM model" really refers to just assembled PASM, that's all. Well, that and it being in the COG. As I note below, I was mixing a coupla things together there.

    David, of course that makes perfect sense. I was kind of fixated on "traditional assembler", and wondering about the differences between a prop and other CPUs. Mixed some things together there.

    Shouldn't we then have a higher level discussion on the merits of those tools? Is the goal to duplicate the traditional workflow, or facilitate the writing of C programs?

    And, how does that difference pay off for attracting prospective users of the technology? Are they interested in doing the same WORK, or writing the same code, or both, and why?

    I'm sensing the need to have it work the way it always works elsewhere, because???

    Have we made the case that is necessary? If we had a tool chain that took C code, and output easy to use binaries, is that enough?

    Edit: "It's gcc, but easier" is the kind of thing I'm getting at. Is it possible to leverage the familiarity of gcc without having to reproduce the workflow exactly? Does it make sense to do that?