Catalina does not include debugging information in the executable. If you use the -g option, the debugging information (in standard stabs format) for each file is included in a separate file with a .debug extension. For an example (following on from the example in my earlier post), try:
catalina -c my_func.c -g
Then examine the resulting my_func.debug file.
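For instance, a minimal my_func.c along these lines (just a hypothetical stand-in, not the actual file from the earlier post) is enough to see the effect:

/* my_func.c - hypothetical example source */
int my_func(int a, int b)
{
    int sum = a + b;   /* a local variable the stabs entries will describe */
    return sum * 2;
}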
Bob Anderson wrote a utility to parse all the individual .debug files and build a consolidated symbol file for use by the BlackCat and BlackBox debuggers, but this format is custom - for other debuggers, you may be better off just using the original stabs files. The stabs format (which is compatible with GDB-type debuggers) is described here. Of course, you need a debugging kernel to be able to use the symbol information at run-time, but Catalina also provides one of those (it is automatically selected when you use the -g option).
So then the question was, how would one arrange to have data in different memory areas, HUB, external RAM or a ROM say? In that final PASM file everything is thrown in together. Is that not a problem with this binder method?
No - the Catalina code generator prefaces all source code with appropriate notations to tell the binder which segment the succeeding statements should go in. These notations are respected by the binder, which shuffles all the source about as required for the selected memory layout. So two source lines that may have been adjacent in the unbound files may end up in different segments in the bound files.
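Catalina's notations are internal to its code generator, but for readers coming from the GCC world the same idea - the compiler tagging each item with a target segment so a later tool can place it - is roughly what named sections do. A hedged sketch, with section names invented purely for illustration:

/* Illustrative only: GCC-style section attributes, not Catalina's actual notation */
int fast_counter __attribute__((section(".hub_data")));       /* keep in hub RAM       */
char big_buffer[32768] __attribute__((section(".xmm_data")));  /* place in external RAM */

The binder (or linker) then gathers everything tagged for a given segment and lays it out according to the chosen memory layout.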
I can see that in an LMM/XMM loop one would totally cripple the thing by having to check where the data is all the time and make the correct memory access. Luckily the overhead of doing that in the Zog interpreter loop is pretty small. :)
Yes - I didn't mean the term 'binder' was original - just how it does what it does. I knew of the Ada usage of the term, and maybe I was influenced by that - but in fact I chose the term because I like the book 'Lord of Light' by Roger Zelazny!
But most of the "churn" I think you are talking about arises from the fact that Catalina supports so many different XMM memory implementations and load options.
Didn't you have to add a new XMM mode to support code in C3 flash with data in C3 SRAM and locals/stack in HUB?
Finally, a certain amount of "churn" is inevitable on open source projects ... It might be a whole lot better for them if they simply hired a professional GCC porting company that would guarantee to do the whole job for them in a few months.
Perhaps this is true. The product would still need to be sufficiently well defined because of the Propeller's differences, though. My task is to help Parallax get what it wants - something that will please Parallax Semiconductor customers.
The problem has nothing to do with the tools - it has to do with the nature of the addressing schemes you have to use to efficiently address both XMM RAM and internal Hub RAM at the same time.
We've already been through this. Code space (.text) is a single segment, so there is no decoding required there. Data, as you mention, is more complex in external memory because of having to choose the right space with a test and jump. The hit, however, is not significant in the bigger picture.
(This performance hit is actually one reason that I'm working with Flash to provide external code space. There are no such decisions required with local/global/stack variables all in HUB. Other reasons are a small-footprint DIP32 solution and very low cost.)
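As a rough illustration of that data-side cost - not Catalina's or Zog's actual code, and the boundary value and helper name are hypothetical - a generic access routine that must decide at run time where an address lives looks like this:

#include <stdint.h>

#define HUB_SIZE 0x8000u                      /* hypothetical: addresses below this are hub RAM */

extern uint8_t hub_ram[HUB_SIZE];
extern uint8_t xmm_read_byte(uint32_t addr);  /* hypothetical external-RAM driver call */

/* Every generic data access pays for this test-and-branch; code fetches do not,
   because .text sits in a single known segment. */
uint8_t read_byte(uint32_t addr)
{
    if (addr < HUB_SIZE)
        return hub_ram[addr];     /* cheap: direct hub access */
    return xmm_read_byte(addr);   /* expensive: external memory transaction */
}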
I don't understand the difference between "a separate library for each platform" and "a single per-platform library"?
Sorry I was not precise enough. Common driver libraries such as Serial, TvText, VgaText, filesystem, etc... would use per-platform definitions. For example in Zog the libsrc/ modules do not have any specific platform attributes, but the attributes are defined by the include/ build*.h modules.
During the make process, the build.h is replaced by the platform's build.h. The same approach can be used with a GUI for customers to define the pins on their product. This is a simple example, and some things would need to be improved to make it more generally usable.
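A minimal sketch of the idea (the macro and file names here are invented, not Zog's actual ones): each board supplies its own build.h, the make process copies the right one into place, and the common driver sources compile unchanged.

/* build.h - hypothetical per-platform pin definitions (one such file per board) */
#define SERIAL_RX_PIN  31
#define SERIAL_TX_PIN  30
#define SD_CS_PIN       5

/* serial.c - common driver source; the only platform knowledge comes from build.h */
#include "build.h"

void serial_init(unsigned baud)
{
    /* ... configure SERIAL_TX_PIN / SERIAL_RX_PIN for the requested baud rate ... */
    (void)baud;
}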
How are you planning to accommodate things like a HiRes VGA driver that requires 3 cogs on the Morpheus versus 2 cogs on every other platform, or the keyboard and mouse drivers that use different hardware on the Hydra than on every other platform?
I'm not really sure why you consider this a problem. It is no different than allowing a software architect the flexibility of using the toolchain. A toolchain is not responsible for drivers. It is responsible for enabling a system.
I'm sure the platform makers you support appreciate your making it easy for users to deal with their boards, but this is something you decided to do because you're a nice guy and wanted market share. Today you have to worry about testing all of it for a release.
As I have said several times before - I do things a certain way in Catalina because it saves me time as the number of platforms I wanted to support grew ever larger. In particular, I am interested in saving time on the release process, so that I can spend more time on the development process. I've got it to the point where I can add a new platform in minutes. So can anyone else who takes the time to learn how (and a few people have done so). Of course, in cases where the new platform requires a completely new set of basic drivers, it takes longer.
If Parallax chooses to support built-in variations of everyone's boards, they will eventually have to hire a full staff of software developers and testers. This will not happen.
Are you saying that GCC will be able to import existing OBEX drivers written in SPIN, and have them callable from C?
The current C drivers use PASM chunks with C library wrappers. The latest Zog is more mature than Heater's last version although I would do some things differently. You can look at the latest source here: http://code.google.com/p/propeller-zpu-vm/source/browse
But I am also happy to take this particular discussion to a different thread if you want your GCC thread not to constantly get bogged down with discussions about Catalina.
It's fine here. Catalina's LCC-based compiler and VM, with a professional tool-chain methodology, is a leverageable alternative in the unlikely event that porting GCC is not feasible. In any case, the discourse is good for the current community.
I do however question the decision to allow this thread to live forever in the public eye, where Parallax Semiconductor customers may simply decide that "they're all nuts" and move on to the next option.
Sorry, but I must say this: hiding that discussion would, in my opinion, hide from the community the fact that the Prop II and its tools are being made by only the few people who have the means to do so - and that the community is just expected to be happy with what it gets!
I for one would be very happy to see this fascinating debate continue in this thread or perhaps series of threads.
Development of the Linux kernel goes on under the public gaze, and there have been some "interesting" debates in that space - apparently without causing the users to decide that all the kernel devs are nuts and move on to some other OS. So I would not worry too much about that aspect.
Hiding that discussion would, in my opinion, hide from the community the fact that the Prop II and its tools are being made by only the few people who have the means to do so - and that the community is just expected to be happy with what it gets!
I understand fully. And if you clearly understood what I said, you would also understand that I do not mean the discussion should be removed today. Your rights and everyone else's right to participate will be preserved I'm sure.
Some things will definitely be done in private, though, because it's simply not our business how some things are done.
Sometime in the future when it is all said and done, the thread should probably be removed so that potential paying customers are not "put off" by all the weird and crazy things that come up. As far as I'm concerned we are really not doing Parallax any favors by providing competitive advantage to other companies by exposing architectural weaknesses. Of course if weaknesses could be fixed, then that would be a benefit, but at what cost?
All these judgments are for Parallax, but I guarantee they will not "argue" with you or anyone else about them on the forums.
Sometime in the future when it is all said and done, the thread should probably be removed so that potential paying customers are not "put off" by all the weird and crazy things that come up.
By that reasoning one would want to delete pretty much the entire Propeller forum. :)
I see no need to remove the thread, or hide any discussion that has happened so far.
It's honest, educational, and there is genuine ambiguity as to how these things are best resolved. Where that's true, debate and some development efforts will add a lot of value. That value can be easily seen by a lot of people. A few might see the discussion and move on, but then again, a few would not see the discussion, wonder why it is the way it is and move on too. There is no strategy that pays off here, so negate it, and focus on the things that do pay off.
That's my take. Remember, the design of the Propeller is actually NEW. Debate of this kind is not only warranted, but necessary, if the NEW design is to be properly bootstrapped into a state that can be used by the many people not willing to do the comp-sci work to make that happen. That gets ignored in these discussions far more than I think is healthy.
Re: Supporting boards. IMHO, Parallax isn't going to do it, but those that make the boards will. The idea that this can be done with the surrounding code working in a platform-independent way is a powerful thing, not so easily dismissed. A hierarchy of binaries and dependencies, as suggested here, can yield the same result, but at the cost of the end user potentially having to know and manage more information.
Re: Competitive. I do not believe masking any weakness makes longer-term sense. To buy that requires that we also believe that ordinary users, or prospective users, aren't willing or able to do the work to vet their choices. That's not realistic. Far better to invest in promoting strengths, as those are the differentiators of value, and thus the primary means we have to promote product selection and adoption - selection and adoption that has a high chance of seeing success.
e.g.: Assembly language programs are limited to 500 words!!! This is easily positioned as a core weakness (and it will be).
8 concurrent CPUs, each with hardware virtualization, or supervisory capability permitting multiple assembly language programs to run alongside high-level ones, concurrently, etc., is a strength that should be put out there in many different forms.
So then the consumer looks at the silly COG statement, then reads the properly positioned strength, and loses a bit of respect for whoever was silly enough to demote a nice chip on arguably impotent information.
This happens all the time. Anyone who bites on the silly COG limit, wouldn't select the chip anyway, due to other biases that would prevent adoption in the first place. Easily ignored and factored out. Not a prospect.
ergo: Hiding that stuff does one no good. Promoting the real value propositions does all the good in the world possible.
The internet is a free country if one lives in a free country.
Just because we have different beliefs doesn't necessarily mean they are wrong. They are just different.
It was not my intention to create some kind of side-show; it was not originally my idea anyway, I just happen to agree. At this point I certainly feel like the recipient of a sucker punch, though.
I am largely happy with what is on the table. Chip knows how to balance this stuff well. And, there is always the means and methods that will be discovered after we've used the chip for a while. Just like Prop I, Prop II will operate in some very clever ways, not understood right now.
Given that, I favor as simple as possible, as robust as possible in the instructions, with LMM execute being a focus, just because that's known and will matter. Bill seems to think a high percentage of native PASM is possible in a LMM style program. That is great. I'll take it.
Too many distractions, or pulls to focus on specific attributes will impact the balance on the silicon, potentially limiting the scope of potential uses. I don't think that's a good idea. Everything costs something, and those costs really won't be known until after this next synthesis, IMHO.
That is why, from the start, I proposed advanced instructions that boost speed and save code space.

Some instructions need to be enhanced for speed and to save memory. Look at my last PDF in the PropII Blog thread, and calculate how many clock cycles and how much memory you would need for the same work - work that can be done there in one instruction.
Surely Chip has seen that, right? Well, if he adds it, he adds it. If not, then not. Again, I'm largely happy with the current state of things, and would prefer NOT to engage in more "add this instruction" advocacy than has already happened.
(back to the GCC dev tool discussion for me)
I really don't want to contribute to some "but we need this instruction" debate here. There are other threads for that. My primary point was that masking weaknesses isn't really all that productive, not to start some attempt to build away those weaknesses. That's a different discussion.
If you have time, I would really like to see your "logos argument" regarding the pros/cons of GCC dev tools. I guess that's asking for a position statement, but in the current discourse any opinion with good justification is appreciated.
Please explain "logos argument" to me, and I'll do my best.
Just FYI, I have NO skin in this game, in that I currently am happy with SPIN + PASM. I have however used gcc and friends on other kinds of work, and would love to see an environment that competes well with SPIN + PASM. The discussion so far is a REALLY GOOD ONE on all sides, and it must happen, or we cannot properly spec the effort desired by Parallax. That's why I bothered to comment as I did. Edit: I also see where Catalina got to, and it's significant.
Whatever is done here, it's got to work nicely, and leverage the chip properly, and that's my other interest for commenting. I want to use it with Prop II, because the scale of that one will warrant doing so.
I'm going to re-read this thread now. I may have a few questions before positioning...
Please explain "logos argument" to me, and I'll do my best.
One of Aristotle's modes of argumentation meaning essentially a "discourse of reason" or an appeal to reason (logic) rather than to an appeal to emotion or credibility.
Does the potential scope of this effort include modifications to GCC, as in doing a commit to the source tree and maintaining that for the propeller over time?
Potentially, yes. It is after all GPL. Any actual commit would likely be coordinated according to product readiness, commit window, and other factors. I don't know the full impact of being on "the train" itself. One of the GCC developers will have more detailed answers.
If so, then that's enough. If not, have we made decisions as to what is GPL, MIT, or closed?
Assuming GCC on Propeller 2 is feasible, the compiler and tool-chain will of course be GPL.
Sources that use LGPL libraries can be MIT or otherwise. I'm pretty sure open source is the direction.
Anyone think about Harvard vs. modified-Harvard vs. von Neumann CPU support and GCC?
I'm asking because dealing with the separate address spaces found in a Prop is going to require potentially significant additions, or modifications to gcc. IMHO, this won't be a trivial exercise.
Secondly, there are a coupla ways this can go: (probably more, but I see these right away)
Native PASM programs limited to the COG, and LMM programs, which require a kernel be built, loaded, etc... which then executes the program itself, which will be built to operate with A kernel, not kernels in general, IMHO. How this is done is a core discussion that would significantly impact the merits of gcc.
Anyone think about Harvard vs. modified-Harvard vs. von Neumann CPU support and GCC?
If you mean separate vs. unified memory models, this is part of the linker problem. Some form of LMM can handle any of these. We already have several implementations in Catalina and ZOG.
This is unlikely. It would be really nice to emit pure PASM for writing and compiling COG code. Bean's PropellerBasic demonstrates how nice the idea is for example.
and LMM programs, which require a kernel be built, loaded, etc... which then executes the program itself, which will be built to operate with A kernel, not kernels in general, IMHO.
An LMM-type VM similar to ICC and Catalina, with enhancements from Bill's research, is probably the only way to go - although how far to go is murky at this point. There may be some other options, but that seems the most reasonable. Bill has several posts on LMM enhancements - referring to them by number is probably a good idea.
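For anyone who hasn't met LMM before, the core of such a VM is just a tight fetch loop running in a cog. A conceptual model in C - the real thing is a handful of PASM instructions, and the helper names here are invented - looks like:

#include <stdint.h>

extern uint32_t read_long(uint32_t addr);      /* hypothetical hub/XMM fetch                 */
extern void     execute_native(uint32_t ins);  /* stands in for the cog executing the fetched
                                                  long in place                              */

/* Conceptual LMM kernel: fetch one native instruction from big memory, advance the
   program counter, execute it, repeat. Jumps are handled by small kernel primitives
   that reload pc rather than by native cog branches. */
void lmm_kernel(uint32_t pc)
{
    for (;;) {
        uint32_t ins = read_long(pc);
        pc += 4;
        execute_native(ins);
    }
}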
It would be wonderful to have multiple VMs for more threads (I preferred VM over kernel since kernel has more of an O/S connotation ... LMM is a VM like it or not).
One question that has not been asked which is obvious to me at least is whether or not multiple threads of execution are possible in one COG. Bill produced a multi-threaded LMM which I think Ross adopted. Ross was kind enough to make his thread library very similar to the GCC pthread library.
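To make the question concrete: the familiar model being referred to is the standard pthreads one, where a few cooperative threads could share one cog's LMM kernel instead of each taking a whole cog. A plain POSIX-style sketch (the generic API, not Catalina's actual thread library):

#include <pthread.h>

/* A worker thread that, on a Propeller port, could be time-sliced inside a single
   cog's LMM kernel rather than being given its own cog. */
static void *blinker(void *arg)
{
    (void)arg;
    /* ... toggle an LED, yield, repeat ... */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, blinker, NULL);  /* spawn the worker thread       */
    /* ... foreground work continues here ... */
    pthread_join(t, NULL);                    /* wait for the worker to finish */
    return 0;
}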
Is Prop I a consideration, or not? Put another way, would we trade optimal functionality in Prop II, for the ability to also build for Prop I?
Prop 1 is a similar machine type to Prop 2. I would really like to see them both supported, but it seems like Prop 2 is the highest priority. Fortunately GCC allows for different machine types from the same family and compiles according to the user's needs, as long as support exists in the "backend." Assuming COG code generation and support for both Propeller types, this would mean 4 machines.
Do we have time for that? Probably not.
@potatohead, Thanks for such great questions. Keep 'em coming if you can.
Just got done reading their docs. They decided to take gcc and bend it to their CPU. I didn't bother to check whether they track gcc releases, but I suspect they don't. The scope of things people want to build is probably not worth doing that. avr-gcc, then, is its own entity - familiar because it's gcc, but different, because it's targeted specifically for a given CPU.
(which is why I asked the will we commit to the gcc tree, or not question.)
I'm looking for a statement or two here, BTW.
On one hand, if we don't fork gcc, we've got to do the tracking and maintenance that comes with being part of that project, and we've got to deal with people being off the versions that are released, supported, etc. On the other, if we fork it, we've got prop-gcc, which could then remain fairly static - meaning once it's done, it's done, and stable. That's worth a lot. And if there were some huge gain in the core gcc project, it could be incorporated, if warranted.
Honestly, if we do gcc, I'm inclined to recommend a fork. Thoughts all?
Re: VM compared to kernel
Yeah, I like VM too. It can be a bit of both really. At the lowest level, it's simply a VM - a supervisor that executes a program residing outside the COG address space. Other services could be provided, however, making it kernel-like...
That's another decision to be made. One unified target VM / Kernel, or a few, that each have their merits? Perhaps a threaded one, non-threaded one to start? For that matter, provisions for multiples? Ideally, this maps to one of the strengths of the chip. We don't have interrupts, so we absolutely have to have multiple things going on at once.
I submit that needs to be as easy as SPIN + PASM is right now. (Or as close as possible, as it's a key strength, and the enabler for that "lego" style, combining of things we all know and love.)
I think I have arrived at not simply jumping on the gcc train. There are enough things different here to warrant a Propeller-specific tool chain. Look at the trouble Imagecraft went through to bend their tool set to the Prop, and contrast that with Catalina. Prop II is roomy enough to treat it like a more standard CPU, but IMHO that's a mistake. It won't compete at its peak without a lot of contortions, and we want it to compete, so we've got to deal with those so they are not contortions.
I also have arrived at the realization that there should be no GUI discussion at this time. We need a robust set of command line tools, and there isn't any consensus on very core decisions. Honestly, I would table this discussion, and start one more fundamental to the new environment, given what we know from Prop I, and given our current expectations on Prop II. The product of that should be a project that builds those tools, according to a spec that we've demonstrated will add value to match the anticipated strengths of Prop II.
By default, it's going to be LMM / XMM environment. There doesn't need to be SPIN at all, we know that too. So, given those things, what does a prop II binary really look like? (and that was discussed above, I'm just affirming that discussion, not contributing anything new, as asked)
From where I stand, the primary advantage of gcc is familiarity, and lots of pieces to build with, along with the potential for more than C support. Whether or not it's really needed is a valid discussion, as is the perception of it being needed. I might suggest that a C-only approach aligns more with "the Parallax way" than not, because it would constrain the constructs people use to a set that can be well matched to the device. Of course, we know how people hate being constrained! But another very real discussion is about expectations. Set them low, and let them be exceeded over time - that's a great story. Set them high, and find over time that they are actually considerably lower - not so great a story. Gotta hash that out too, and I'm in favor of the former, given what happened on Prop I.
If we don't do gcc, then we've got to build something of our own. Catalina has a lot of battle-won lessons to build on! That's not something we should ignore. It works now, and it's actually somewhat potent too. I am not saying "let's just do what Ross did", but I am saying we need to give it a good, close look before either forking gcc or building something entirely new. The chip is different enough that it's a safe bet that failure to really think things through will waste a considerable amount of time.
@Jazzed: So, there you go.
I have some other questions, largely because I currently cannot see how this would work, but I'm going to sit on them for a bit, curious to see where the discussion goes from here.
I also believe we should not consider Prop I. The tools available for it right now are good tools! Maybe it can be targeted with a subset of what happens for Prop II. Development time is significant right now, IMHO. Potentially, this effort is already behind the curve, having not even started, if the expectations for Prop II are to be taken seriously.
Edit: Forgot this positioner: I don't think gcc brings anything to the table on its own, other than recognition and familiarity. It is a platform to do the work needed, though, and that's a good thing - but so is Catalina, and of the two, Catalina actually works on Prop I now.
Can we blend these two? That's primary in my mind, because we want the work to be adding value, not doing a retread of things already known the hard way.
(flame suit on, but I was asked --remember, I've no skin in the game, only interested in our greater success)
One question that has not been asked which is obvious to me at least is whether or not multiple threads of execution are possible in one COG. Bill produced a multi-threaded LMM which I think Ross adopted. Ross was kind enough to make his thread library very similar to the GCC pthread library.
potatohead, you are doing a fantastic job of raising the issues that we should have been discussing all along! Keep up the good work!
So as not to derail this thread back onto an argument about the pros and cons of Catalina, I'll just clarify/comment on individual points (like the one above).
Catalina's multithreaded kernel was developed from scratch - I don't recall seeing Bill's version, but I may have done.
Anyway, in my view the kernel must be designed with support for multithreading from "day one". I'd love to claim credit for having the necessary foresight, but in fact it was completely fortuitous that I had just enough space left in my original kernel to add in the necessary minimum support for multithreading once I realized I was going to need it. If this had not been the case, Catalina would not now be able to fully exploit the inherent multiprocessing nature of the Propeller.
The nature of the multithreading support would be an interesting topic - but not having it on a chip which naturally supports multiprocessing would be a bit silly.
Well, maybe my lack of "seasoned" experience in these things is a minor-league benefit. Honestly, I don't see the vision on this thing, nor how it connects to the chip in ways that would be valuable. The need for this kind of effort is clear however.
Didn't you have to add a new XMM mode to support code in C3 flash with data in C3 SRAM and locals/stack in HUB?
A new XMM mode? No - the necessary XMM mode already existed (x5) and I could probably have used it if all I had wanted to do was be able to load programs serially and then execute them from FLASH/SRAM.
But I chose to add two new mode numbers (i.e. x2 ==> x4 and x5 ==> x3) even though the memory layouts are essentially the same because I also wanted to be able to boot programs from FLASH.
One of the key differences between (say) x3 and x5 is that in x5 only the code and cnst segments would be stored in FLASH (since that's where they will execute from) and the data and init segments would be stored in SRAM - but these segments would then be lost on re-boot. The x3 mode number tells the various loaders to also store copies of the init and data segments in FLASH, so that they can be restored to SRAM whenever the Propeller is re-booted.
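The restore step described here is the classic embedded start-up pattern; a hedged C sketch (the symbol names are generic linker-style placeholders, not Catalina's actual ones):

#include <stdint.h>
#include <string.h>

/* Hypothetical symbols a linker script would provide: where the initialized-data
   image is stored in FLASH, and where it must live in SRAM while the program runs. */
extern const uint8_t _data_flash_start[];
extern uint8_t       _data_sram_start[];
extern uint8_t       _data_sram_end[];

/* Run once at (re-)boot, before main(): copy the saved init/data image out of FLASH
   into SRAM so that globals regain their initial values. */
void restore_data_segments(void)
{
    size_t len = (size_t)(_data_sram_end - _data_sram_start);
    memcpy(_data_sram_start, _data_flash_start, len);
}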
It is the load and boot processes on the Propeller that add most of the complexity - especially for XMM programs. The built-in loader on the Propeller knows nothing about XMM and can store and boot programs from Hub RAM and EEPROM only. Catalina now knows about ten different varieties of XMM, and can store and boot programs from Hub RAM, EEPROM, SD card, SRAM and (now) FLASH.
You will find you need to deal with all these issues with GCC. Is there a better solution than the one I use? No doubt!
Getting the GCC compiler to generate Propeller binaries is only a part of the story. David Betz asked me recently how long it took to get Catalina working. I quipped something to the effect that it took 2 weeks to get the first code generator working, 2 months to get it optimized enough to be useful, and then 2 years to do all the peripheral work required to make it usable.
Honestly, if we do gcc, I'm inclined to recommend a fork. Thoughts all?
We used to have this discussion all the time at Cisco and other places. I think it really comes down to maturity of the train in question. Is the train a dog-food train or an experienced release?
Forking a train to do work allows one to do development without interruptions, but it can make merging back pretty difficult. Many nights I have spent resolving train merge conflicts ... avoiding "train wrecks" is a noble cause. I expect that the GCC development team keeps up with changes pretty well though so they can adapt to such problems. This question depends heavily on the judgement of the contractor who is responsible for the port.
As you say though, a decision probably has to be made up front for the code to be merged back.
Perhaps a threaded one, non-threaded one to start? For that matter, provisions for multiples? Ideally, this maps to one of the strengths of the chip. We don't have interrupts, so we absolutely have to have multiple things going on at once.
Proof of concept is in the pudding. If a single threaded port can be done, then a multi-threaded variation can probably be done too assuming one prepares for it.
I submit that needs to be as easy as SPIN + PASM is right now. (Or as close as possible, as it's a key strength, and the enabler for that "lego" style, combining of things we all know and love.)
Easy is fine, but it must be flexible and should not require a tool-chain change for a customer to get something done. That would be a disaster to me.
Corner cases may require some maintenance, but generally the GNU/GCC toolchain brings a set and forget solution to the table for us.
Customers who want/need GCC know what to expect and are well armed to deal with the standard paradigm. Customers who don't know GCC may appreciate other approaches more.
I don't think gcc brings anything to the table on it's own, other than recognition and familiarity.
Familiarity is very important to the class of customers that Parallax Semiconductor must have.
People should visualize one of 2 futures for Parallax in the context of this discussion.
1) A future where Parallax Semiconductor meets the lofty goals that have been set.
2) A future where Parallax Semiconductor fails.
The 2nd future is unacceptable. Parallax Semiconductor's future is far bigger than every single one of us and the sum of our experience. A failure means you will have no choice but to go deal with another MCU.
So please consider only what Parallax Semiconductor's success means. Consider a future where the "rising tide" does what it should.
I am absolutely sure for various reasons that Ross will produce a Propeller 2 Catalina regardless of whether a GNU/GCC port includes Propeller 2.
I suppose many will argue things differently again, and again, but I'll just leave it at this:
Perception is what we see, tempered by our experiences. We all have different experiences, and thus different perceptions. Just accept that some believe differently, and that we should serve the audience what they expect - the best possible effort.