GCC / Eclipse and Propeller 2 - seeking developers - Page 11 — Parallax Forums


Comments

  • ctwardellctwardell Posts: 1,716
    edited 2011-05-26 06:27
Egos can provide extra drive to accomplish pretty amazing things, but they also get in the way sometimes...

    C.W.
  • Roy ElthamRoy Eltham Posts: 3,000
    edited 2011-05-26 08:01
    RossH,
I respectfully disagree (strongly) with your ratings for C++ in the performance, maintenance cost, complexity, and defects areas. I can't argue with longevity, other than that all of the languages listed have been around for decades (or more). I have worked with C++ since it was first released in the 80s. I worked with C before that and still do from time to time, and of course I have done a fair amount of assembly across many CPUs and MCUs. My experience has been that the easiest to code and maintain of those three is C++. Performance in most cases has a lot more to do with hardware specifics (like memory access patterns) and algorithms than with the language you use. Obviously, the quality of the compiler matters a lot. Sure, you can contrive examples, but in real-world usage the language choice often makes little difference (assuming quality compilers in all cases).

    There is one area that you leave out, sort of, it's covered a bit with complexity, but "time to code" is a big factor. The amount of time it takes to achieve a working solution (including performance goals and functionality goals) is a HUGE factor to consider in most projects. Assembly is almost always "right out" in this area.

My C++ code (and that of most of our many millions of lines of code at work) contains zero STL, and zero of the C++-specific RTL (like iostreams and its related junk). It's actually quite common for this to be the case. We do use much of the C++ language itself (templates, etc.), although we avoid RTTI, and operator overloading is confined to math classes (where it makes sense). Honestly, I think there should be two definitions of C++. One is the Wikipedia-defined one, and the other is the one that I (and most of the C++ coders I know) use, which excludes STL and the C++-specific RTL components.

However, in the case of a non-standard architecture (such as the Propeller), especially one that is not very old (like the Propeller), a lot of this goes out the window. For the Prop, with its incredibly nice assembly language and its arguably simple architecture, it's easy to code in PASM. It's also much more difficult to target with a "standard" compiler for other languages. This changes significantly with the Prop 2.

    In any case, it's a REALITY that the world demands C (and sometimes C++) as a language choice for your MCU products. Parallax has run into this, and that is why they are doing this GCC effort. So it really doesn't matter what language any of us prefers. The goal is a good C solution and a good C++ solution, if we can get other languages fairly easily using GCC, then why wouldn't we?
  • Heater.Heater. Posts: 21,230
    edited 2011-05-26 08:02
    dMajo,

    Not being Italian I cannot be sure but this page defines Arduino as a "valuable friend": http://www.thinkbabynames.com/meaning/1/Arduino

    I guess it may also be an old and not much used name today but an Italian friend of mine is always mentioning his friend Arduino. But then he is a Friulian where they have a somewhat different language going on.
  • Dave HeinDave Hein Posts: 6,347
    edited 2011-05-26 08:31
    Sapieha wrote: »
Most code, when moving from one type of micro to another, needs some degree of rewriting - even if the other micro uses the same variant of "C".
    In my opinion, one of the strongest attributes of C is its portability. C code can be written in a generic manner that is very portable. There have been many times that I have been able to compile and run code that I got from an internet site without any changes. There is a lot of older C code that contains many #ifdef's to handle C variants, but much of that has gone away with ANSI C compilers. Of course, C code that has been optimized for a particular processor may require a lot of changes to run on other processors. It is good practice to have a generic version that will run on any processor in addition to optimized versions for specific processors. The processor-specific code can usually be hidden in header files or C files that are optimized for that processor.

    Dave
  • SapiehaSapieha Posts: 2,964
    edited 2011-05-26 08:37
    Hi Roy.

That is precisely what I wrote in my previous post.

The C++ you describe - with some elements of the ANSI/ISO directives omitted - is no longer C++; it is only a derivative of it.
To put it in other words -- it is a C++-syntax-compatible compiler. BUT don't ask me what it should be named.

And, as I said, "C" rather than that derivative of "C++" is what is needed for the Propeller, for the reason I gave in my previous post ->
<-- "BUT I understand the NEED for "C" of any variant on the Propeller/Propeller II. That gives people who can only program in "C" the comfort of rewriting only the parts of code needed to PORT it to the Propeller/Propeller II."
And that is the biggest advantage of having any type of "C" that supports the Propeller/Propeller II.


    Roy Eltham wrote: »
    RossH,
I respectfully disagree (strongly) with your ratings for C++ on the performance, maintenance cost, complexity, and defects areas. [...]
  • jazzedjazzed Posts: 11,803
    edited 2011-05-26 16:06
    dMajo wrote: »
I wish success to all of you, and if someday you need help that I can give, I will - even if only with valium :lol::lol::lol:
    :)

    I've heard Valium is very addictive ... like a Propeller.
  • RossHRossH Posts: 5,519
    edited 2011-05-26 16:27
    Roy Eltham wrote: »
    RossH,
    I respectfully disagree (strongly) with your ratings for C++ on the performance, maintenance cost, complexity, and defects areas.

    Hi Roy,

    I can live with you disagreeing with me - as long as you do so on some kind of factual basis :smile:.

The funny thing is that in fact we are agreeing on the main point - i.e. that C++ (i.e. the "full-blown" language, as defined by ANSI/ISO) is a completely inappropriate language for embedded or real-time work - and is hardly ever used in those domains. Various subsets of it are used - but these subsets tend to vary from company to company (for example, your company allows templates, but others wouldn't. Some companies allow exceptions, others wouldn't. Some companies say they use C++ - but when you look closely they are essentially just using a C++ compiler to compile C programs).

    On the other hand C (i.e. the "full-blown" language, as defined by ANSI/ISO) is both appropriate and routinely used.

The other point is that it is unlikely that any GCC port to the Propeller is going to go much further towards full ANSI/ISO C++ compliance than the Arduino port does - and for the same reasons. If it does go further, it will not be for any compelling reason based on any actual need - and the perception of C++ performance on the Propeller (which is already likely to be an issue) will suffer as a result.

    I think identifying this subset is going to be one of the main decisions the team performing the GCC port will have to make. Do Parallax intend to support only what the Arduino supports? More? Less?

    Ross.

    P.S. You're correct that I didn't include a separate "time to code" metric. If I had, I would have said "time to code and test" instead - and then Ada would be the winner by a very, very large margin. Probably followed by Modula-2, then C++, then C, then Assembly. Poor old Assembly loses out not only because of the complexity, but also because of the sheer amount of typing you have to do!
  • RossHRossH Posts: 5,519
    edited 2011-05-26 16:30
    ctwardell wrote: »
Egos can provide extra drive to accomplish pretty amazing things, but they also get in the way sometimes...

    C.W.

    :lol:
  • Roy ElthamRoy Eltham Posts: 3,000
    edited 2011-05-26 16:59
    RossH,
I would go further and say that NO ONE uses the full C++ language as Wikipedia defines it, other than maybe some academics. It's just not needed or realistic. Also, there is no compiler that implements it all completely.

    Roy
  • RossHRossH Posts: 5,519
    edited 2011-05-26 19:52
    jazzed wrote: »
    What I mean is that the compiler and VM would just be one very important part of a larger package. Other parts would be necessary such as a linker, linker scripts, and other common binary utilities. I know your binder does lots of the linker function, but having to constantly add platforms and memory models is a little difficult. GCC has lots of flexibility that presumably many ParallaxSemiconductor target customers have come to expect. IMHO Catalina should also be capable of such things.

    Hi jazzed,

    We seem to have gotten sidetracked onto other issues (yes, I know - my fault :smile:). Going back to your earlier post (above) - perhaps we can explore this a bit further?

    First, I should clear up some misconceptions about Catalina ...

Catalina's catbind utility is a linker - it is a source-level linker. It is language agnostic and platform independent. But of course it needs to know about any new memory layouts, since (like any linker) it is the component that actually builds the final executable binary. So far, even though the memory models on the Propeller are quite complex (mainly due to the various types of XMM supported), everything necessary can currently be specified using one command-line option (-x). I actually added some other command-line options (-P, -R) in anticipation of more complex needs - but so far I have found no need to actually use them, even though I support about a dozen different XMM-enabled Propeller platforms (many of them using completely different XMM memory types and architectures). For non-XMM Propeller platforms, the memory layout requirements are so trivial as to require no special consideration - although I am thinking of adding another option to simplify the simultaneous execution of SPIN and C.

On the other hand, if you look at the GNU ld program, you will find massive amounts of platform-dependent and binary-format-dependent stuff having to be specified via complex and arcane linker scripts. None of this is required for the Propeller - unless you want to support complex binary and object formats (COFF, ELF etc) for some reason. If you want these formats then fine, but let's not put the cart before the horse here - the GNU binutils are required because of the complexity of the GNU binary and object formats - not the other way round. Catalina uses the simple Parallax binary format, and therefore doesn't need them.
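For reference, the kind of linker-script machinery being discussed looks roughly like this - a minimal GNU ld sketch for the flash-code/hub-data split jazzed mentioned, with all region names, origins, and sizes invented purely for illustration (no actual Propeller port is implied):

```
/* Hypothetical GNU ld script sketch: code and constants in external flash,
   data in hub RAM. Addresses and sizes are made up for illustration. */
MEMORY
{
    flash (rx)  : ORIGIN = 0x30000000, LENGTH = 1M
    hub   (rwx) : ORIGIN = 0x00000000, LENGTH = 32K
}
SECTIONS
{
    .text : { *(.text*) *(.rodata*) } > flash
    .data : { *(.data*) } > hub AT > flash  /* init image stored in flash */
    .bss  : { *(.bss*)  } > hub
}
```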

    Are you proposing to adopt ELF as the standard object format for the Propeller? That's fine for Linux users, but it certainly makes the Propeller very Linux-centric (whereas currently it is Windows-centric). It will require Windows users to install one or the other of the various GNU "enabling" technologies (e.g. MinGW, Cygwin). This is a significant shift for Parallax, and one that could be detrimental to their hobbyist appeal (hobbyists probably don't give two hoots about file formats - and why should they?).

I'm not exactly sure what other utilities you think you need. Someone (can't find the thread now) mentioned make. I use make all the time to build Catalina programs - works a treat! Or do you mean utilities for library management (ar, ranlib etc)? The same argument as above applies - i.e. these are primarily required because of the complexities of the GCC object formats. In Catalina, the catbind program does those functions too. Library management is actually much easier when everything (including your libraries) is in source format!

    The part of Catalina that does change with each new platform is not the compiler or the linker - it's the target. I do make life somewhat difficult for myself by having a combined "standard" target package - but this does actually save me lots of work, as it makes it easier for me to support all the platforms I currently do. In GCC I suppose you would instead provide a separate library of drivers for each supported platform. You could do this with Catalina if you wanted. It would make each user's life easier (but make my life harder!).

One final question (related to the issue of multiple targets). Currently it is very easy for me to take an OBEX driver and generate a version suitable for use with Catalina - e.g. Keyboard, Mouse, SD Card, Video (VGA, TV), Serial Comms, Floating Point, Graphics libraries etc. It's not completely trivial - but it's also not very difficult, taking only minutes or hours. With GCC, you will presumably have to re-develop all these, for every platform. Much of the synergy that could exist between SPIN and C users of the Propeller will be harder to achieve - unless Parallax intends to make the new Propeller Tool also use ELF. Is this the case? Has anyone even given this any thought?

    Ross.
  • Roy ElthamRoy Eltham Posts: 3,000
    edited 2011-05-26 22:57
    The first command line tools I was thinking of were: a compiler, a linker, and a downloader. Others here probably have a better idea of what will ultimately be needed.

    Also, considering that initially Parallax is only concerned with doing the GCC stuff for Prop 2, we'll have to write support objects for all the Prop 2 ways of doing things whether it's spin, pasm, C, or whatever...
  • Kevin WoodKevin Wood Posts: 1,266
    edited 2011-05-26 23:20
    >>> I would go further and say that NO ONE uses the full C++ language as Wikipedia defines it other than maybe some academics.

    I would go further still and say that NO ONE uses EVERY feature of ANY language in any given project, regardless of the targeted processor.

    Regarding the Arduino... I think to be objective in this discussion, people need to forget about the "Arduino" environment, and keep in mind that the current standard Arduino module is simply a carrier board containing an Atmega328 pre-programmed with a bootloader. So technically, the "Arduino" supports anything that supports the Atmega328. There's no reason an Arduino user can't skip the Arduino environment altogether and go straight to AVR Studio, Rowley CrossWorks, etc.

    I'm not trying to start another Propeller vs. Arduino debate. However, IMO, if people really want the Propeller to be taken seriously outside of these forums, they need to lay off the NIH rhetoric and start approaching things in an open and unbiased manner.

    Sure, you can keep the Propeller as your favorite, but continually bashing the choices of others is not going to make people feel welcome to these forums. I've never seen one Parallax representative come on here and flame people because they chose this, that, or the other.

    It's fairly obvious that Parallax is making an effort to move their business in new directions, directions which people may or may not agree with. It's also obvious that they've asked for help in doing this. So maybe a little less bickering and a little more forward thinking will help them get there.

    tl;dr - Can't we all just get along? :)
  • Roy ElthamRoy Eltham Posts: 3,000
    edited 2011-05-26 23:35
I posted another reply to the Propeller II blog post in the prop forum with details about the new pointer registers and instructions.
    http://forums.parallax.com/showthread.php?125543-Propeller-II-update-BLOG&p=1003759&viewfull=1#post1003759

    I'm wondering should I just start new threads over here with these new instruction details, or continue adding them over there? If the former, then should I do it for the two posts I have already made over there? It doesn't really matter where the info gets posted since anyone can read the posts wherever they are...
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-05-26 23:49
    Roy,

    I don't know that it matters where it gets posted. We'll find it -- wherever -- and be grateful just to have it! :)

    Thanks, man!
    -Phil
  • Roy ElthamRoy Eltham Posts: 3,000
    edited 2011-05-27 00:21
    I had to make some changes to that post, I had forgotten an important detail, and also messed up the WRxxxx parameters. So go back and look again, if you saw that post before 12:20am on 5/27/2011.
  • RossHRossH Posts: 5,519
    edited 2011-05-27 03:14
    Roy Eltham wrote: »
    I'm wondering should I just start new threads over here with these new instruction details, or continue adding them over there? If the former, then should I do it for the two posts I have already made over there?

    Hi Roy,

    Yes please! I'd much prefer you started a new thread, and kept an up to date document attached to the first post that could be referred to by all. Some people probably manage to read all the threads on these forums, but others (like me) only really get time to keep up with one or two threads - and even then, trawling through a thread hundreds of entries long to find the detail you're looking for can be trying - especially with a forum search function that doesn't seem all that useful at finding anything at all.

    Ross.
  • SapiehaSapieha Posts: 2,964
    edited 2011-05-27 04:21
    Hi Roy.

    I'm with Ross.
And one addition: in the first post, GIVE a description stating that this thread is only for INFO and discussion of the INSTRUCTION set, and that all other posts will be deleted.

That gives us a clean INFO thread on the Propeller II instructions.

    RossH wrote: »
    Hi Roy,

Yes please! I'd much prefer you started a new thread, and kept an up to date document attached to the first post that could be referred to by all. [...]
  • AntoineDoinelAntoineDoinel Posts: 312
    edited 2011-05-27 13:52
    A bit of an update...

    Paging LMM2:

Essentially: paging in large blocks of code, executing within the cog, and fetching the next block when execution branches out of scope or calls a function. Cluso99 and some others on the forum have also tried this approach, I believe, as it is an obvious alternative to fetching one instruction at a time like LMM1 does. I really liked PhiPi's reverse loader variant.

    This works great on small trivial code, returning great benchmark results, but fails horribly on large programs due to thrashing when the locality of reference is not ridiculously high.

Bill, has a lazy JIT approach to jump translation ever been considered in this cache model?

I mean, initially the LMM jumps point to special evaluation functions; those will replace in-page jumps with native ones, and out-of-page jumps with the real LMM far jump code...
  • Bill HenningBill Henning Posts: 6,445
    edited 2011-05-27 14:06
AntoineDoinel wrote: »
Bill, has a lazy JIT approach to jump translation ever been considered in this cache model? [...]

It's not the jumps that hurt, it's the constant paging :-(
  • AntoineDoinelAntoineDoinel Posts: 312
    edited 2011-05-27 14:34
Bill Henning wrote: »
It's not the jumps that hurt, it's the constant paging :-(

    Hmmm... to make things a little better wrt paging, my original ramblings on the subject (directed at Prop I) also included:
    - splitting the cache in two, "sort of" L1 2-way associative (the checks for half-to-half would be done in long jumps anyway)
- some stats collecting in the burst refill, to (temporarily?) revert to 4-way unrolled LMM when thrashing gets too high
    - all this on top of L2 unified code/data cache via VMCOG

    On paper it looked promising, 20 cycles on pure linear code, much faster on small loops... but I guess you're right, the worst case makes it a dead end. :depressed:

    Well, on the bright side, my 100th post... only 17400 left to catch Mike Green! :lol:
  • jazzedjazzed Posts: 11,803
    edited 2011-05-27 16:50
    Ross,

Please don't take any of my reply here badly. We have to get through all this. You have given a superhuman effort with Catalina. Others and I certainly appreciate it.

    I will be porting Flash and SDRAM code to Catalina this summer because Propeller 1 will likely never see an efficiently running version of a GNU/GCC toolchain. My ports won't be easy to finish, but anything worth doing with long term value rarely requires zero effort.
    RossH wrote: »
    Catalina's catbind utility is a linker - it is a source-level linker. It is language agnostic and platform independent. But of course it needs to know about any new memory layouts, since (like any linker) it is the component that actually builds the final executable binary.

    I have a new platform that uses Flash for code and constants and HUB for everything else. What will it take to add support to Catalina for that?

    With a generic linker that uses .ld scripts to define the memory layout, it is not a problem.

Constant churn of tools is bad for customers and developers because "churn" is not maintainable. A general solution is best. A professional tools paradigm already exists and should be provided, one way or another, to ParallaxSemiconductor customers.
    RossH wrote: »
    ... wrote:
    What would it take to make a MEDIUM module where code and cnst are in flash and init and data are in hub memory?
    My sanity!
    Why should your sanity be at stake because someone wants another memory model?
    I'm not trying to embarrass you or anything.
    The GNU/GCC toolchain model doesn't sweat it, why should you?
    RossH wrote: »
    The part of Catalina that does change with each new platform is not the compiler or the linker - it's the target.
    Hmm, did I misunderstand the context of the quote above?
    RossH wrote: »
    In GCC I suppose you would instead provide a separate library of drivers for each supported platform.
    There is no need to provide separate library drivers for each platform. A single per-platform library can be used to define the hardware. This is of course vastly easier with more advanced languages.
    RossH wrote: »
    You could do this with Catalina if you wanted. It would make each user's life easier (but make my life harder!).
I'm afraid you've set yourself up for all this recurring hard work with your implementation and usage model. We want a package that any one of us can easily maintain. If you get hit by a bus, how would that affect the rest of us?
    RossH wrote: »
One final question (related to the issue of multiple targets). Currently it is very easy for me to take an OBEX driver and generate a version suitable for use with Catalina - e.g. Keyboard, Mouse, SD Card, Video (VGA, TV), Serial Comms, Floating Point, Graphics libraries etc. It's not completely trivial - but it's also not very difficult, taking only minutes or hours. With GCC, you will presumably have to re-develop all these, for every platform. Much of the synergy that could exist between SPIN and C users of the Propeller will be harder to achieve - unless Parallax intends to make the new Propeller Tool also use ELF. Is this the case? Has anyone even given this any thought?
    Phil already asked this question in this thread. This has been done for years already. Such library code lives in the OBEX now. Ports have already been done for ZOG for several drivers with almost zero effort. It's a piece of cake. Drivers should be reviewed like anything else for functionality and applicability.
  • RossHRossH Posts: 5,519
    edited 2011-05-27 21:13
    jazzed wrote: »
    Ross,

    Please don't take any of my reply here badly. We have to get through all this. You have given a super human effort with Catalina. Others and I all certainly appreciate it.
    Don't worry jazzed, the only time I get annoyed is when people try to argue a case without having made even a rudimentary attempt to gain the necessary background knowledge.
    jazzed wrote: »

    I will be porting Flash and SDRAM code to Catalina this summer because Propeller 1 will likely never see an efficiently running version of a GNU/GCC toolchain. My ports won't be easy to finish, but anything worth doing with long term value rarely requires zero effort.
    Agreed on all counts.
    jazzed wrote: »

    I have a new platform that uses Flash for code and constants and HUB for everything else. What will it take to add support to Catalina for that?

    With a generic linker that uses .ld scripts to define the memory layout, it is not a problem.
    If you think so, then it may be that you don't fully understand the problem yet. I'll come back to this later.
    jazzed wrote: »

Constant churn of tools is bad for customers and developers because "churn" is not maintainable. A general solution is best. A professional tools paradigm already exists and should be provided, one way or another, to ParallaxSemiconductor customers.
No disagreement from me. But most of the "churn" I think you are talking about arises from the fact that Catalina supports so many different XMM memory implementations and load options. The compilation, code generation and linkage components of Catalina have been extremely stable for quite a while. For instance (apart from a couple of minor bug fixes), I have not needed to change either the debugger or the code optimizer for something like the last four or five releases - it's all been about new loaders (e.g. Catalyst) and new XMM boards (e.g. the C3).

    I could have reduced the "churn" on the XMM support simply by electing not to support some of the more bizarre XMM implementations (such as that used on the C3) - but I decided that would be both counter-productive and also make Parallax look quite silly with regard to their flagship demonstration board.

With regard to the "churn" on the loaders - even with the very simple Parallax binary format, loading these programs into the Propeller is a pain. I can just imagine how much worse it will be to have to do it for a complex format. Most PC users simply ignore how difficult it can be to load complex binaries (like COFF or ELF). That's because on processors where the loader program itself can be kilobytes or even megabytes in size, and the memory model is trivially simple, it can be done reasonably quickly. Wait till you have to try doing it in 496 instructions, when the memory and the hardware techniques you have to use to access it are quite bizarre!

Finally, a certain amount of "churn" is inevitable on open source projects (the same thing happens on closed source projects as well, of course - you simply don't get to see it). But this raises an interesting point - perhaps Parallax really shouldn't be thinking about doing such an important project this way. It might be a whole lot better for them if they simply hired a professional GCC porting company that would guarantee to do the whole job for them in a few months. There are several such companies - I already looked them up (Google is your friend here).
    jazzed wrote: »

    Why should your sanity be at stake because someone wants another memory model?
    I'm not trying to embarrass you or anything.
    You'd have to try harder than this :smile:.
    jazzed wrote: »
    The GNU/GCC toolchain model doesn't sweat it, why should you?
    The problem has nothing to do with the tools - it has to do with the nature of the addressing schemes you have to use to efficiently address both XMM RAM and internal Hub RAM at the same time. I'm slightly tempted to just leave you to find these things out for yourself (then I could say "I told you so" later!), but instead I'll compromise by giving you a big clue - things are fine and dandy provided all your data addresses use the same addressing scheme, and all your code addresses use the same addressing scheme (which may be different schemes, or may be the same scheme). However, things start to fall apart badly when you try to have data (or code) addresses that use two different addressing schemes depending on whether the address is in Hub RAM or XMM RAM. It can be done - but I never figured out a way to do it that was not ruinously inefficient. And as soon as it became less efficient than just having all data and code in XMM RAM, then what's the point of it? Who would use it?
    jazzed wrote: »

    Hmm, did I misunderstand the context of the quote above?
    Yes.
    jazzed wrote: »

    There is no need to provide separate library drivers for each platform. A single per-platform library can be used to define the hardware. This is of course vastly easier with more advanced languages.
    I don't understand the difference between "a separate library for each platform" and "a single per-platform library".

    How are you planning to accommodate things like a HiRes VGA driver that requires 3 cogs on the Morpheus versus 2 cogs on every other platform, or the keyboard and mouse drivers that use different hardware on the Hydra than on every other platform? I think you may be underestimating the scope of the problem here.
    jazzed wrote: »
    I'm afraid you've set yourself up for all this recurring hard work with your implementation and usage model. We want a package that any one of us can easily maintain. If you get hit by a bus, how would it affect the rest of us?
    Hmm. Isn't this the point of "open source"?

    As I have said several times before - I do things a certain way in Catalina because it saves me time as the number of platforms I wanted to support grew ever larger. In particular, I am interested in saving time on the release process, so that I can spend more time on the development process. I've got it to the point where I can add a new platform in minutes. So can anyone else who takes the time to learn how (and a few people have done so). Of course, in cases where the new platform requires a completely new set of basic drivers, it takes longer.
    jazzed wrote: »

    Phil already asked this question in this thread. This has been done for years already. Such library code lives in the OBEX now. Ports have already been done ZOG for several drivers with almost zero effort. It's a piece of cake. Drivers should be reviewed like anything else for functionality and applicability.

    Well, I have to say I simply can't understand your answer here, which makes me think that you may not have understood my question. You refer to Phil, but I've looked back through this thread and can't see where this particular issue is addressed by him. Also, you mention ZOG, but as far as I know (apologies in advance if I am misrepresenting anything, Heater!) ZOG includes only the FullDuplexSerial serial driver, and appears to use a modified version of only the PASM components of that driver anyway - just like Catalina does.

    Are you saying that GCC will be able to import existing OBEX drivers written in SPIN, and have them callable from C? If so, then I'm impressed, and I look forward to seeing how you achieve it (if it works then I would consider adopting it for Catalina!).


    Thanks for taking the time to respond to my post in such detail, and I really do look forward to more discussions. But I am also happy to take this particular discussion to a different thread if you want your GCC thread not to constantly get bogged down with discussions about Catalina.

    I have just had some email discussions with Michael Park, and I can confirm that I will be adding Prop II support to Homespun as soon as details of the instructions become available - so there will be a Catalina port for the Prop II. However, I am also happy to contribute lessons learned from my experience with Catalina to your GCC effort.

    Ross.
  • Heater.Heater. Posts: 21,230
    edited 2011-05-28 02:24
    RossH and all,

    Damn interesting conversation.

    Ross, I'm a bit lost as to what a binder actually is. The idea of a traditional linker is clear enough, but what does it mean to be linking at the source code level? Is it so that effectively all the source modules get mashed into one huge source that is compiled all at once? Or what?
    ZOG includes only the FullDuplexSerial serial driver, and appears to use a modifed version of only the PASM components of that driver anyway

    Well yes. Except that normally when I run Zog all I/O goes through Spin via a mailbox interface. In that way C and Spin code can live together. I did pull out the PASM of FullDuplexSerial for use directly from C, also Lonesock's float object, with the idea of enabling a C only system. It has a long way to go and I'm never going to be in the business of churning out drivers.
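    For anyone unfamiliar with the mailbox pattern Heater mentions, here is a minimal C sketch of the idea: the C side posts a command into a shared structure in hub RAM, and the driver cog (normally PASM or Spin) polls it, performs the I/O, and marks the request done. All field and function names here are invented for illustration - they are not Zog's actual interface:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical mailbox shared between the C side and a driver cog. */
    typedef struct {
        volatile uint32_t cmd;     /* 0 = idle, nonzero = request pending */
        volatile uint32_t arg;     /* e.g. character to transmit          */
        volatile uint32_t result;  /* driver's reply                      */
    } Mailbox;

    enum { CMD_IDLE = 0, CMD_TX = 1 };

    /* C side: post a request (real code would then spin until the
       driver writes CMD_IDLE back to acknowledge completion). */
    static void mailbox_tx(Mailbox *mb, uint32_t ch)
    {
        mb->arg = ch;
        mb->cmd = CMD_TX;
    }

    /* Driver side (normally a separate cog): service one request. */
    static void mailbox_service(Mailbox *mb)
    {
        if (mb->cmd == CMD_TX) {
            mb->result = mb->arg;  /* stand-in for the actual transmit */
            mb->cmd = CMD_IDLE;
        }
    }
    ```

    The appeal of the scheme is that the driver cog needs no knowledge of C calling conventions at all - both sides agree only on the hub RAM layout of the mailbox.
    
    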
  • David BetzDavid Betz Posts: 14,516
    edited 2011-05-28 03:23
    Heater. wrote: »
    RossH and all,

    Damn interesting conversation.

    Ross, I'm a bit lost as to what actually a binder is. The idea of a traditional linker is clear enough but what does it mean to be linking at the source code level?. Is it so that effectively all the source modules get mashed into one huge source that is compiled all at once? Or what.



    Well yes. Except that normally when I run Zog all I/O goes through Spin via a mailbox interface. In that way C and Spin code can live together. I did pull out the PASM of FullDuplexSerial for use directly from C, also Lonesock's float object, with the idea of enabling a C only system. It has a long way to go and I'm never going to be in the business of churning out drivers.

    In the version of ZOG I was working on as part of zogload I ported a number of the drivers that jazzed made work with ICC to ZOG. They load the PASM part of the code directly from C code and also communicate with the PASM COGs using C. Once ZOG is started the Spin COG is no longer needed. In fact, I start ZOG using coginit rather than cognew unless I'm doing debugging (based on your run_zog model).
  • David BetzDavid Betz Posts: 14,516
    edited 2011-05-28 03:28
    RossH wrote: »
    With regard to the "churn" on the loaders - even with the very simple Parallax binary format, loading these programs into the Propeller is a pain. I can just imagine how much worse it will be to have and do it for a complex format. Most PC users simply ignore how difficult it can be to load complex binaries (like COFF or ELF). That's because on processors where the loader program itself can be kilobytes or even megabytes in size, and the memory model is trivially simple, it can be done reasonably quickly. Wait till you have to try doing it in 496 instructions, when the memory and the hardware techniques you have to use to access it are quite bizarre!

    Just because you use either ELF or COFF as an object file format does not mean that you necessarily have to use it as an executable format. The final step of a link could be to convert to a simpler format suitable for the loader. You only really need the more complex ELF or COFF formats when you are handling relocatable files. Once every address is absolute a simpler format works just fine. The only exception I can think of to this is dynamically linked libraries and I don't think we're planning on supporting those on the Propeller just yet.
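    The post-link "flatten" step David describes can be sketched in a few lines of C. Once the linker has made every address absolute, loading reduces to copying each loadable segment to its place in a flat image - the simple format a tiny on-chip loader can handle. The structure below is a minimal stand-in for the relevant ELF program-header fields (real code would use `<elf.h>` and `PT_LOAD` filtering); the names and layout are illustrative only:

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Minimal stand-in for an ELF program header's loadable-segment
       fields; not any Propeller toolchain's actual format. */
    typedef struct {
        uint32_t p_vaddr;    /* absolute load address */
        uint32_t p_filesz;   /* bytes of data to copy */
        const uint8_t *data; /* segment contents      */
    } Segment;

    /* Copy every segment to its absolute address in a flat image.
       After this, the loader needs no relocation logic at all. */
    static void flatten(const Segment *segs, int n, uint8_t *image)
    {
        for (int i = 0; i < n; i++)
            memcpy(image + segs[i].p_vaddr, segs[i].data,
                   segs[i].p_filesz);
    }
    ```

    This is essentially what `objcopy -O binary` does for GNU toolchains on other embedded targets, which supports the point that the loader itself never has to understand ELF.
    
    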
  • RossHRossH Posts: 5,519
    edited 2011-05-28 05:16
    Heater. wrote: »
    Ross, I'm a bit lost as to what actually a binder is. The idea of a traditional linker is clear enough but what does it mean to be linking at the source code level?. Is it so that effectively all the source modules get mashed into one huge source that is compiled all at once?

    Yes, that's correct. A 'binder' (my own term, since - as far as I am aware - this is unique) does the same job as a linker - but at source level, not object level. This makes it a whole lot easier to implement (a few hundred lines of C code).

    Normally all this detail is hidden from you if you use a single Catalina command to do the whole job. To do it in the more 'traditional' compile-then-link manner, try this:
    cd C:\Program Files\Catalina\demos
    
    catalina -c my_func.c                    <---- compile my_func.c to my_func.obj
    catalina -c my_prog.c                    <---- compile my_prog.c to my_prog.obj
    catbind.exe -a -lc my_prog.obj my_func.obj -o bound.obj   <---- bind the objects together
    
    Now, using any text editor, examine the files my_func.obj and my_prog.obj - with Catalina the .obj files are simply stand-alone PASM versions of the corresponding C files, which could be compiled with any PASM compiler. Except that if you do so without first 'binding' them, you will get an error about the unresolved symbols in those files.

    Then examine the file bound.obj. This is the final ('bound') version of these files - it contains not only all the PASM code from the .obj files, but also all the necessary PASM source for all the library references those files need. The code that needs to be included from the library is determined by a simple text parse of the PASM .obj files - the source for each of those routines is then extracted from the libc library files (which are also just PASM source files) and - using your term - 'mashed' into one file.

    The bound.obj file is a platform-independent PASM version of the whole C program, including all library functions. This file can be compiled with any suitable SPIN compiler (e.g. homespun, bstc, or the Parallax Propeller tool) and will not generate any errors - all the symbols have been resolved. But at run time, this file expects to be loaded and run in an ANSI C environment, which is what each target (which is just a SPIN program) provides - i.e. a common environment for each particular Propeller platform.
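    The "simple text parse" step can be modelled in a few lines of C: scan the program's PASM text for references to library labels and append each referenced routine's source. The labels and routine bodies below are invented, and catbind's real logic is considerably more involved - this only shows the shape of the idea:

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy library: each entry pairs a PASM label with that routine's
       PASM source text (both invented for this sketch). */
    struct lib_entry { const char *label; const char *source; };

    /* Text-level "binding": copy the program through, then append the
       source of every library routine the program references. */
    static void bind(const char *program, const struct lib_entry *lib,
                     int nlib, char *out, size_t outsz)
    {
        snprintf(out, outsz, "%s", program);
        for (int i = 0; i < nlib; i++)
            if (strstr(program, lib[i].label))      /* referenced?     */
                strncat(out, lib[i].source,         /* append its code */
                        outsz - strlen(out) - 1);
    }
    ```

    Because everything stays plain PASM text throughout, the output can be handed to any PASM assembler, which is exactly the property Ross relies on.
    
    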

    Ross.
  • RossHRossH Posts: 5,519
    edited 2011-05-28 05:20
    David Betz wrote: »
    Just because you use either ELF or COFF as an object file format does not mean that you necessarily have to use it as an executable format. The final step of a link could be to convert to a simpler format suitable for the loader. You only really need the more complex ELF or COFF formats when you are handling relocatable files. Once every address is absolute a simpler format works just fine. The only exception I can think of to this is dynamically linked libraries and I don't think we're planning on supporting those on the Propeller just yet.

    Quite true. Feel free to adopt Catalina's 'extended' binary format :)

    Ross.
  • David BetzDavid Betz Posts: 14,516
    edited 2011-05-28 06:17
    Ross,

    What kind of debugging information do you include in your object files and the final "bound" files? Do you include global, file local, and local symbol names and types, function parameter names and types, struct and union definitions, enum definitions, etc. that are needed by GDB?

    Thanks,
    David
  • Heater.Heater. Posts: 21,230
    edited 2011-05-28 06:33
    RossH,

    Ah, with you now. In a nutshell, the compiler produces one PASM file for each C file. The binder glues all the PASM files together into a single PASM file, thus resolving all the symbols. The assembler produces the binary from that PASM file. I was stuck on your statement about "source level" linking - I didn't think about assembler source level. Silly me. It had occurred to me to use exactly that technique with my toy TINY compiler that produces LMM. Writing a linker just seemed like one of those chores in life that I never want to do :) However nobody would want to write a big program in TINY that required multiple source files.

    So then the question was, how would one arrange to have data in different memory areas, HUB, external RAM or a ROM say? In that final PASM file everything is thrown in together. Is that not a problem with this binder method?

    Previously:
    The problem has nothing to do with the tools - it has to do with the nature of the addressing schemes you have to use to efficiently address both XMM RAM and internal Hub RAM at the same time
    I can see that in an LMM/XMM loop one would totally cripple the thing by having to check where the data is all the time and make the correct memory access. Luckily the overhead of doing that in the Zog interpreter loop is pretty small:)

    P.S. I thought you were into Ada. The GNAT Ada compiler uses a "binder" however that is a totally different process. http://gcc.gnu.org/onlinedocs/gnat_ugn_unw/Binding-an-Ada-Program.html#Binding-an-Ada-Program