Question: Propeller architecture in ? family? - Page 3 — Parallax Forums


135 Comments

  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-12 15:33
    @Heater
    http://www.linusakesson.net/scene/turbulence/
    That is my favorite graphics demo for the propeller. I can never draw anything that fast on the propeller (and yes, I use assembly), but it's probably because I turn the detail level up way too high. I'm jealous of how much faster his mandelbrot renderer in turbulence is than mine, I should probably look at the source to see how he did it.
    Yes, this demo is amazing and from what Heater says it uses something like what I was proposing for avoiding polling loops. I'm going to have to take a look at the code!
  • cavelambcavelamb Posts: 720
    edited 2014-08-12 15:34
    Perhaps I misunderstood...when the Z80 executes a HALT all processing stops. This is not so on the Propeller chip. When a cog is waiting for a pin state (including WAITPNE/WAITPEQ) the other cogs continue to perform their tasks uninterrupted by the cog, even when the pin state has triggered the cog to take some action. The same cannot be said for the Z80 and mind you, I spent 10 years doing Z80 development including using vectored interrupts. My NMI locked into the 60 Hz from the AC line (borrowed idea from Commodore).

    Sure, Chris, but I was thinking of the Z80 as a cog.
    Imagine eight Z80s and a quarter meg of hub space!

    And I really have to agree with Heater about the interrupt discussion.
    It's really a non-issue.
    This is the way Von-Gracey architecture works - and it works well.

    My ONLY complaint about the Propeller is the tiny memory space.

    I understand that the underlying implementation technology has difficulty scaling up.
    So I don't expect to see a 256 cog 1024 pin device anytime soon.
    But the P8x32, while somewhat divergent from old-school architecture, is perfectly valid.

    Richard

    Heater, check PM...
  • Chris SavageChris Savage Parallax Engineering Posts: 14,406
    edited 2014-08-12 16:15
    David Betz wrote: »
    The original post was about Propeller architecture so I don't think the discussion is too off topic. Also, the original poster brought up the question of interrupts so I don't think that is off topic either. It seems perfectly valid to me to discuss how someone familiar with a more traditional architecture would do similar things on the Propeller and also how those things could be done in a Propeller-like fashion rather than just trying to emulate an incompatible architecture.

    Off topic or not, once it becomes argumentative it stops being informational and useful. I was attempting to prevent that, so useful discussion can prevail.
  • Heater.Heater. Posts: 21,230
    edited 2014-08-12 16:20
    cavelamb,
    Von-Gracey architecture

    Brilliant, I love it!

    You have cheered up my whole day with that.
  • Chris SavageChris Savage Parallax Engineering Posts: 14,406
    edited 2014-08-12 16:22
    cavelamb wrote: »
    This is the way Von-Gracey architecture works - and it works well.

    We're going to have to see about making that official. :thumb:
  • kwinnkwinn Posts: 8,697
    edited 2014-08-12 16:45
    Perhaps most of the disagreements and misunderstandings are due to the limitations of earlier architectures that the prop no longer has.

    Single core processors needed to 'interrupt' the main process to perform I/O and other tasks. The Propeller does not need to interrupt anything since it can assign a cog to those functions.
  • richaj45richaj45 Posts: 179
    edited 2014-08-12 17:04
    Hello:

    On the issue of interrupts.
    I have noticed that the async serial drivers, UARTs, that do both transmit and receive with the same asm code are skilfully handcrafted so receive bits are not missed whilst transmit bits are going out. Actually I call the code convoluted, since it does not want to miss the falling edge of a start bit but must keep up the timing of the transmit bits and the follow-on receive bits. Now I am talking about the UART code that is designed for the lowest jitter.

    This type of design goes against one task per COG, but then since there is a limited number of COGs it is understandable.
    It is also a place where interrupts make sense, given two independent tasks are being done on one processor, a COG.

    cheers,
    rich
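
The interleaving richaj45 describes can be sketched as a toy scheduler in C: one loop juggles two timed state machines (TX and RX) by always servicing whichever bit deadline comes first. This is an editorial illustration, not code from any actual Propeller driver; the names (`bit_engine_t`, `service_earliest`) and the tick arithmetic are invented for the sketch, and the bit-level work is stubbed out.

```c
#include <stdint.h>

/* Toy model of a full-duplex bit-banged UART loop: two timed state
   machines share one processor, and each pass services whichever
   bit deadline comes first. Tick counts stand in for CNT values. */
typedef struct {
    uint32_t next_due;   /* tick at which the next bit must be handled */
    int      bits_left;  /* bits remaining in the current frame */
} bit_engine_t;

/* Service whichever engine is due first; returns which one ran
   (0 = TX, 1 = RX). A real driver would shift a bit out or in here.
   The signed subtraction handles tick-counter wraparound. */
static int service_earliest(bit_engine_t *tx, bit_engine_t *rx,
                            uint32_t bit_ticks)
{
    if ((int32_t)(tx->next_due - rx->next_due) <= 0) {
        tx->next_due += bit_ticks;   /* schedule the following TX bit */
        tx->bits_left--;
        return 0;
    }
    rx->next_due += bit_ticks;       /* schedule the following RX bit */
    rx->bits_left--;
    return 1;
}
```

The "convoluted" part of a real driver is that the RX deadline is not known in advance: it is set by the falling edge of a start bit, which must not be missed while a TX bit is being timed, which is exactly why such code is handcrafted.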
  • Peter JakackiPeter Jakacki Posts: 10,193
    edited 2014-08-12 18:30
    richaj45 wrote: »
    Hello:

    On the issue of interrupts.
    I have noticed that the async serial drivers, UARTs, that do both transmit and receive with the same asm code are skilfully handcrafted so receive bits are not missed whilst transmit bits are going out. Actually I call the code convoluted, since it does not want to miss the falling edge of a start bit but must keep up the timing of the transmit bits and the follow-on receive bits. Now I am talking about the UART code that is designed for the lowest jitter.

    This type of design goes against one task per COG, but then since there is a limited number of COGs it is understandable.
    It is also a place where interrupts make sense, given two independent tasks are being done on one processor, a COG.

    cheers,
    rich

    Given the increase in complexity in not just accepting an interrupt but saving and restoring status and registers, along with determining priorities etc., it is much more preferable to stick to the Von Gracey architecture (which means a cog is not a CPU per se, but the cogs in unison constitute the "CPU") and give it a cog. When Chip was first talking about doing a super P1 in place of the 5WP2 he said that he could fit 128 cogs in the same silicon as the 5WP2. That's a lot of "smart interrupt handlers". Of course the new P2 will have 16 cogs, so that helps me to dedicate cogs for high-speed low-jitter resources without the complexity of setting up and handling interrupts and DMA that I have on other processors.
  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-12 18:45
    Off topic or not, once it becomes argumentative it stops being informational and useful. I was attempting to prevent that, so useful discussion can prevail.
    I don't see anything wrong with a friendly argument. Personal attacks are another story though.
  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-12 19:10
    Given the increase in complexity in not just accepting an interrupt but saving and restoring status and registers, along with determining priorities etc., it is much more preferable to stick to the Von Gracey architecture (which means a cog is not a CPU per se, but the cogs in unison constitute the "CPU") and give it a cog. When Chip was first talking about doing a super P1 in place of the 5WP2 he said that he could fit 128 cogs in the same silicon as the 5WP2. That's a lot of "smart interrupt handlers". Of course the new P2 will have 16 cogs, so that helps me to dedicate cogs for high-speed low-jitter resources without the complexity of setting up and handling interrupts and DMA that I have on other processors.
    Given the Von Gracey architecture, how does the main code interact with its "smart interrupt handlers"? How does it know when there is data available to process without having to constantly poll potentially many hub variables?
  • cavelambcavelamb Posts: 720
    edited 2014-08-12 19:35
    David Betz wrote: »
    Given the Von Gracey architecture, how does the main code interact with its "smart interrupt handlers"? How does it know when there is data available to process without having to constantly poll potentially many hub variables?

    Probably the same way other systems do?
    The interrupt handler gets chars and sticks them in a circular buffer.
    Maybe sets a flag that a char is in, or the "un-used" character count in the buffer.

    A routine that the main program can call to get the next character updates buffer pointers, counts, etc.,
    and returns the next character from the buffer.

    Set (or get?) baud rate for serial ports, etc.

    Same as for keyboard handlers.
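
The buffer scheme cavelamb describes maps directly onto a serial cog filling hub RAM while the main cog drains it. A minimal sketch in C (the names `rb_put`/`rb_get`/`rb_count` are illustrative, not from any Propeller object):

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal circular buffer: the handler (producer) fills it, the main
   program (consumer) drains it. A power-of-two size lets the indices
   wrap with a cheap mask instead of a modulo. */
#define RB_SIZE 64              /* must be a power of two */

typedef struct {
    uint8_t  data[RB_SIZE];
    volatile uint16_t head;     /* written only by the producer */
    volatile uint16_t tail;     /* written only by the consumer */
} ringbuf_t;

static bool rb_put(ringbuf_t *rb, uint8_t c)   /* called by the handler */
{
    uint16_t next = (rb->head + 1) & (RB_SIZE - 1);
    if (next == rb->tail)
        return false;           /* buffer full: drop or flag overrun */
    rb->data[rb->head] = c;
    rb->head = next;
    return true;
}

static bool rb_get(ringbuf_t *rb, uint8_t *c)  /* called by main */
{
    if (rb->tail == rb->head)
        return false;           /* empty: no character waiting */
    *c = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) & (RB_SIZE - 1);
    return true;
}

static uint16_t rb_count(const ringbuf_t *rb)  /* the "character count" */
{
    return (uint16_t)((rb->head - rb->tail) & (RB_SIZE - 1));
}
```

Because there is exactly one producer and one consumer, `head` and `tail` each have a single writer, so no lock is needed; this is the same property that makes the hub-RAM version work between two cogs.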
  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-12 19:48
    cavelamb wrote: »
    Probably the same way other systems do?
    The interrupt handler gets chars and sticks them in a circular buffer.
    Maybe sets a flag that a char is in, or the "un-used" character count in the buffer.

    A routine that the main program can call to get the next character updates buffer pointers, counts, etc.,
    and returns the next character from the buffer.

    Set (or get?) baud rate for serial ports, etc.

    Same as for keyboard handlers.
    Ummm... I was going to reply with a flaw in this approach but somehow I've lost my train of thought and my argument seems to have fallen apart. It may be that you're right. In any case, if I'm going to continue to support the idea that interrupts should be available as an option I'm going to have to think through my example better. One thing that I do think is a slight failing in the model you propose is that this "flag" needs to be checked continually and that means that the main COG can never go into low power mode. I think this can be solved in better ways than introducing interrupts though.

    By the way, I agree that interrupts introduce extra complexity and uncertainty in the system, and I am a big supporter of the idea of soft peripherals. My old company VM Labs had this idea back in 1995 when we tried to create a DVD player decoder chip with audio and video decoders implemented in general purpose processors that could be retargeted at other applications when decode wasn't needed. In fact, we ended up with some of the same tradeoffs as the Propeller, where we ended up having to put some of the logic in hardware because the software wasn't fast enough. We tried to do that as little as possible though, and to have most of the higher level logic in software. However, we did have interrupts, partly because we only had four processors, not eight like the Propeller.

    Anyway, I'd like to think that interrupts are entirely unnecessary. Maybe they are and I just haven't thought about it enough to understand how to structure a system without them. My current theory is that you could manage a single main COG and a bunch of peripheral COGs with a single hardware latch that the peripheral COGs could set and that the main COG could wait on to detect the occurrence of an event and clear once an event is noticed. The waiting would be done with an instruction like waitpeq so the main COG could be in low power mode until an event occurs. The rest of the COGs would likely also be in low power mode waiting for pin transitions.

    I don't like this architecture much though, since it assumes only a single "main COG" that is using the others as peripherals. I'd rather see more symmetry, but I don't see how to do that without having one of these latches per COG. Anyway, I'm sure many of you here could come up with better solutions and I'm looking forward to seeing them. Thanks for an interesting discussion, and I'm going to concede for the moment.
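
The set/wait/clear semantics of the single-latch idea can be modelled in software. This is a C11-atomics sketch of the protocol only (the type and function names are invented for illustration); in the proposed hardware, `latch_take`'s busy-wait would be replaced by a low-power WAITxx-style instruction.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Software model of the single-latch scheme: any "peripheral" sets the
   latch; the "main" side tests-and-clears it in one atomic step, so an
   event raised between the test and the clear cannot be lost. */
typedef struct {
    atomic_bool raised;
} event_latch_t;

static void latch_set(event_latch_t *l)       /* any peripheral COG */
{
    atomic_store(&l->raised, true);
}

static bool latch_take(event_latch_t *l)      /* main COG: test-and-clear */
{
    return atomic_exchange(&l->raised, false);
}
```

Note that multiple sets before a take coalesce into one event, which mirrors the limitation of a single shared latch: the main COG learns that *something* happened and must still check each peripheral's mailbox to find out what.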
  • cavelambcavelamb Posts: 720
    edited 2014-08-12 20:04
    David Betz wrote: »
    Ummm... One thing that I do think is a slight failing in the model you propose is that this "flag" needs to be checked continually and that means that the main COG can never go into low power mode. I think this can be solved in better ways than introducing interrupts though.

    If there is a spare pin, run a flag up it and see who salutes...
    Then main() is waiting on a pin - and low power happens.
    Would that work with one of the imaginary pins (above 31)?
  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-12 20:08
    cavelamb wrote: »
    If there is a spare pin, run a flag up it and see who salutes...
    Then main() is waiting on a pin - and low power happens.
    Would that work with one of the imaginary pins (above 31)?
    I mentioned this in an earlier post. I don't think it would work on pins 32-63 since as far as I know they don't exist at all, even internally, on the P1. Also, I wasn't able to come up with a protocol for using a pin as a flag. It seems to me that it would need to be possible for any of the "peripheral" COGs to set the pin and then for the "main" COG to clear it when it returns from the waitpeq instruction. How do you do that? If a COG is forcing a pin high then there is no way for another COG to force it low. My only idea involved using three pins, one for the flag itself, one used by the peripheral COGs to set the flag, and one used by the main COG to clear it again. There has to be a better way though. Anyone have any ideas?

    Edit: Oh and I forgot the worst feature of my three pin scheme, you'd need an external hardware latch. Ugh.
  • cavelambcavelamb Posts: 720
    edited 2014-08-12 20:36
    David Betz wrote: »
    I mentioned this in an earlier post. I don't think it would work on pins 32-63 since as far as I know they don't exist at all, even internally, on the P1. Also, I wasn't able to come up with a protocol for using a pin as a flag. It seems to me that it would need to be possible for any of the "peripheral" COGs to set the pin and then for the "main" COG to clear it when it returns from the waitpeq instruction. How do you do that? If a COG is forcing a pin high then there is no way for another COG to force it low.


    My only idea involved using three pins, one for the flag itself, one used by the peripheral COGs to set the flag, and one used by the main COG to clear it again. There has to be a better way though. Anyone have any ideas?


    That might clear the external flag, but the cog that set it would still have it asserted.


    Edit: Oh and I forgot the worst feature of my three pin scheme, you'd need an external hardware latch. Ugh.

    Yeah... I was kinda thinking that the service routine could handle clearing the pin, but if the cog is stopped... :(
    I've never been able to share a pin between cogs.
    Might need to talk to Von about that for later designs?

    Also, pardon me please for calling it an interrupt handler (post #72).
    Just habit.
    It would be more accurately called a "device handler", wouldn't it?
    (Even if it does use interrupts )
  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-12 20:46
    cavelamb wrote: »
    Yeah... I was kinda thinking that the service routine could handle clearing the pin, but if the cog is stopped... :(
    I've never been able to share a pin between cogs.
    Might need to talk to Von about that for later designs?

    Also, pardon me please for calling it an interrupt handler (post #72).
    Just habit.
    It would be more accurately called a "device handler", wouldn't it?
    (Even if it does use interrupts )
    I was trying to call this sort of thing an "event handler".
  • cavelambcavelamb Posts: 720
    edited 2014-08-12 20:56
    David Betz wrote: »
    I was trying to call this sort of thing an "event handler".

    Technically that's what it is, of course.
    But that phrase seems to be pretty tightly wed to the Object Oriented Programming Systems
    (OOPS? who thought of that) like Visual Basic and Visual C.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-08-12 22:10
    ksltd wrote:
    While there are many approaches to emulating or simulating interruptions, the fundamentals of prioritized, asynchronous, context-preserving transfers of execution control and facilities for restoring context and returning to the point of interruption are simply not present within the architecture.

    ... as if that's a bad thing.

    It's rare in history for an emulator to predate the thing it's supposed to be emulating. But that's precisely the case with interrupts. Interrupts are a dodge -- a kludge, if you will -- scabbed onto single-core processors to let them pretend to be multiple-core processors. Over the years, and absent any significant multi-core challenge, they've taken on an aura of indispensability, as if they're better than the very thing they pretend to be. It's no wonder, then, that their entrenched champions take great umbrage when the real thing -- a.k.a. the Propeller chip -- comes along, threatening to unseat the pretenders that interrupts really are.

    This forum-worn argument takes place nearly every six months. The same points get raised and rehashed over and over again. There's nothing in this thread regarding interrupts that's new -- even this post. But I guess it's important that the subject receive regular booster shots to ward off the infectious belief that interrupts are necessary. They are not. At least not in the Propeller's case.

    -Phil
  • Heater.Heater. Posts: 21,230
    edited 2014-08-12 23:25
    Well said Phil.

    I may have said this here multiple times already but...

    Nobody actually wants interrupts. Not as an end in themselves. Who would want all that analysing of timing interactions, messing with priorities, context preservation, etc., etc.? Who would want to have to re-analyse all that when they add a new asynchronous function that may well have a bad impact on the timing of what is there already?

    No. What they want is a means to get bits of their code executed in response to some asynchronous event. Often with stringent latency bounds.

    Interrupts, like Phil says are that cheap kludge grafted on to processors to make what we really want happen to at least some degree of effectiveness. They were probably the only practical way to do it when processors were huge and expensive, starting with those old vacuum tube monsters. They come with the cost of making your software impossible to reason about and fragile.

    Given the enormous transistor budgets available today we should be looking for better ways to get what we want. The Propeller and the XMOS devices are exploring those better ways.
  • ErNaErNa Posts: 1,752
    edited 2014-08-13 04:33
    Such discussions can run endlessly, as long as the sides do not see that every implemented solution is a compromise. So both sides are right, to a certain extent. When in the Course of human events, it becomes necessary for one people to change mind, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to do so. Not the worst reason is to gain knowledge.

    When I created a floppy disk driver for CP/M, I switched from polling to interrupts as soon as the floppy disk controller generated one. After the data rate was doubled, the latency of the interrupt became too high, so I switched back to polling, which on the other hand was not a problem, as I sent a command to the controller and didn't have anything else to do until the data started to stream.

    Whenever any principle becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new principles that create a cheaper solution. But when a long train of heroic efforts and frustration, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such architectures, and to provide new parallel COGs for their future prosperity.

    If there are resources, and they are for free, nobody should hesitate to make good use of them. And should not care about effective usage if he reaches his goal in a simple manner. This is valid for example for the synapses in the brain, for cogs in Propellers, for love and understanding. There is one exception: nobody should make use of the resources which are related to the term "stupidity", which according to A. Einstein are infinite. ;-)
  • Heater.Heater. Posts: 21,230
    edited 2014-08-13 05:21
    ErNa,

    Wow! I love it.

    "...it is their duty, to throw off such architectures, and to provide new parallel COGs for their future prosperity."

    Chip, are you listening? It's your duty to provide new parallel COGs!

    :)
  • mklrobomklrobo Posts: 420
    edited 2014-08-13 07:28
    ksltd wrote: »
    Case in point - Interrupt and Von Neumann architecture are both formal terms that are well defined and well understood in the science of computer architecture. Their definitions are not the subject of debate and the tests for whether a given architecture is Von Neumann or supports Interrupts are trivial.

    The architecture of the processor cores within the Propeller SOC is Von Neumann - it provides for a stored program that is co-resident with the data store; instructions and data reside within a homogeneous store with uniform addressing, there is a program counter, a facility for IO and external mass storage.

    The architecture of the processor cores within the Propeller SOC does not support Interrupts. Period. While there are many approaches to emulating or simulating interruptions, the fundamentals of prioritized, asynchronous, context-preserving transfers of execution control and facilities for restoring context and returning to the point of interruption are simply not present within the architecture.

    This lack of formality, while seemingly trivial in many cases, often makes detailed discussion close to impossible because no one actually knows what anyone means when they talk about Propeller. While the congeniality and colloquialisms often seem "nice", they actually do a huge disservice to anyone actually trying to precisely understand, discuss or describe the operation of the device. And the key thing about processors is that they are inordinately (absurdly) precise.

    That the manual is devoid of formal terms such as Fetch, Execute and Retire and fails to differentiate between Operands and Accesses makes it impossible to precisely understand. Similarly, the Spin language documentation is devoid of formal terms such as Definition, Activation, Formal Parameter, Actual Parameter, Local, Static, Global and many, many more of the terms used to describe the syntax, semantics, composition and execution of software.

    Is it good for someone building commercial product? Absolutely, unequivocally not.

    Very clever observation. I have reviewed the forum's threads, and found that what you are speaking of is true. I do not have enough expertise in the Propeller (yet), and thought that I was in error, for lack of experience with the Propeller, when people spoke about colloquialisms. I thought that maybe this was "in house" jargon, and so I would wait patiently until I understood.:innocent:

    Insofar as the commercial product version, Peter J. has produced commercial products with the Propeller; BUT, I think he adapted a heuristic he invented, which evolved from Forth, and which has the ability to overcome and surpass the expected technical limitations, as you have pointed out (can not argue with results).

    What you have pointed out is exactly what I was afraid of; there must be some organization, as you have pointed out, but maybe not in the way one would expect from the conventional point of view.

    Since Peter J. has exceeded expectations with Tachyon, the logical path to create the ultimate Propeller (my opinion) is to adapt his heuristic and solidify the code into a standard that everyone can use. Comments?
  • ErNaErNa Posts: 1,752
    edited 2014-08-13 07:54
    mklrobo wrote: »
    ksltd wrote: »

    because no one actually knows what anyone means when they talk about Propeller
    This is the weak point! When MS-DOS arrived, a call to an entry point was called a "soft interrupt". So generations of software engineers (engineers: I beg your pardon, but I didn't coin this word) gained a special understanding of what an interrupt is. Read Descartes: I dreamed that I dreamed, so could it be that "reality" is just a dream? For those that do not like to read: what about watching "MATRIX"? What I want to express: it doesn't matter if there is parallelism, as long as there is no way to find out the "truth". If something behaves like it is parallel, it is. So: in a perfect system, where the real time condition is always met, an interrupt-driven solution is equal to a parallel solution and there is no way to find out by observation what is real. In this case the machine must have more resources than needed, as the interrupt can happen at any time.

    One of the reasons to develop the transputer was flight control for the Tornado jet fighter. Occam as a parallel language runs any code at any time, unpredictably. So there is no real-time memory allocation, and it cannot happen that memory runs out during a program run. And two routines will never use the same physical memory. Data corruption is excluded this way.

    Again: let's focus on important things. The Propeller is the only self-contained processor family. Go to work and stop convincing the unconvinceable. There is only one solution: learning by doing. By the way: every individual needs the same time to gain understanding, but to some you have to explain it more often!
  • Heater.Heater. Posts: 21,230
    edited 2014-08-13 08:05
    We already have standards that everyone can use. Many of them. :)

    Formally specified or not the Spin language is a standard in the Propeller world. There are at least four Spin compilers that accurately compile Spin and have been very well tested. It is in the nature of standards to document common industry practice so there you are. Just write a formal language specification for Spin and submit it to ECMA or the standards body of your choice. I'm sure the community would be very grateful for such an effort.

    Then we have C. There are at least three standards compliant C compilers for the Propeller. You can't get much more standard than that.

    There are of course a ton of other languages that have been created for the Propeller. Notably a few Forth dialects and a few BASIC dialects. I don't know so much about them, but their creators don't seem to be much into any kind of standardization. Those free thinkers like to do what they like to do. Good for them.

    As of the 6th of August nobody has a leg to stand on when complaining about the lack of, or quality of, documentation for the Propeller. The internal workings have been laid out for all to see as an Open Source release. It's all been offered for the taking, for free.
  • potatoheadpotatohead Posts: 10,261
    edited 2014-08-13 08:11
    Code as documentation.
  • Heater.Heater. Posts: 21,230
    edited 2014-08-13 08:20
    I don't say "code as documentation".

    We have always had documentation. It's perfectly readable. Mostly accurate. Good enough for what 99% of Propeller users need. As far as I can tell anyone who is really curious about some edge case in that remaining 1% can ask Parallax or here and the answer is soon forthcoming. So far that is on a par with my experience of documentation for many devices from those big names we all know and love.

    Now we have the source code of the P1. Well heck. Now you can look at some last remaining 0.1% omissions in the documentation and figure it out from the code.

    So I say. No one has a leg to stand on any more when complaining about Propeller documentation.

    By the way, if such a busy person happens to find the Open Source code does not do what the Prop actually does that is a bug and they should report it.
  • Chris SavageChris Savage Parallax Engineering Posts: 14,406
    edited 2014-08-13 08:22
    ... as if that's a bad thing.

    It's rare in history for an emulator to predate the thing it's supposed to be emulating. But that's precisely the case with interrupts. Interrupts are a dodge -- a kludge, if you will -- scabbed onto single-core processors to let them pretend to be multiple-core processors. Over the years, and absent any significant multi-core challenge, they've taken on an aura of indispensability, as if they're better than the very thing they pretend to be. It's no wonder, then, that their entrenched champions take great umbrage when the real thing -- a.k.a. the Propeller chip -- comes along, threatening to unseat the pretenders that interrupts really are.

    This forum-worn argument takes place nearly every six months. The same points get raised and rehashed over and over again. There's nothing in this thread regarding interrupts that's new -- even this post. But I guess it's important that the subject receive regular booster shots to ward off the infectious belief that interrupts are necessary. They are not. At least not in the Propeller's case.

    -Phil

    Couldn't have (and obviously didn't) said it better myself. My sentiment, exactly! :nerd:
  • potatoheadpotatohead Posts: 10,261
    edited 2014-08-13 08:55
    Well, I do say it. Code as documentation.

    Obviously, we don't want to just provide or have code only. That's not very accessible. It is very desirable to have code in tandem with documentation, as both together can provide levels of detail mere non-executable documentation would be very expensive to deliver alone.

    An example from 8 bit times. I owned and used three 8 bit computers: Apple, Atari, Tandy Color Computer. The Apple was completely open. It shipped with schematics, the standard documentation we are discussing here, and a full ROM listing. If you wanted to know something about that machine, you studied the information, including code, and then perhaps filled your own knowledge gaps as needed elsewhere, and you knew that thing. Now, there were timings and such that did end up needing additional treatment, but one did know where and how to look no matter what. The question was one of equipment, more than it was anything else. I did end up pointing my Tek scope at that Apple a time or two. Poked at the Atari and its outputs a lot of times. "What the heck is it doing?"

    The Atari machine shipped with a nice user manual, and little else. Heck, it didn't ship with BASIC. That cost extra. The thing took years to get documented, and even then, some behavior was never known until long after the machine became irrelevant to the vast majority of people. Notably, there was no code documentation at all. One of the first things I did on mine was type in a little disassembler from COMPUTE magazine, point it at the ROM and start parsing. The documentation most people had was code other people had written that worked... somehow. Books filled in the gaps. Today, we've cut the chips open, scanned them, and we also have schematics of most of them. Now it's documented, I suppose, but we still don't know all the behavior. Mostly because nobody prodded at the circuit in every way possible. How much do we need to get going?

    The Tandy was nearly as open as the Apple. I do not recall code documentation, but there was a schematic, etc... And one could get the datasheets on the chips identified on the schematics too. Here's something awesome: I got a ride up to a local Motorola office and asked. They gave me all of it: CPU, peripheral chips, video chip, etc... Nice guys. I still have my 6809 programmer's manual from the Moto people. That was a great and inspiring gift. "Really kid? Ok, here you go man! Go for it!"

    When we first got the Propeller, we got some code, commented code, and a rough document. Something like Propeller Guts. Then we got the Manual. And then we got a datasheet. The video sub-system and the counters were mostly code documented at first, and then we got the counter Application Note, and the data sheet gave the video sub-system more detail. A call for "more documents" was made about the video system, and that discussion went on for some time. We ended up inferring what it did, tested those inferences, and the documentation today on those very detailed bits is code, and some text comments.

    There isn't much about the Propeller that isn't documented, but it took a while. No worries, because we did have code early on, and that got people willing to read code, going nicely enough. Having read that code, they could test a few things and get the behavior information they needed too.

    Now we've essentially released the code to the chip itself. That's an extremely technical treatment, and IMHO, serves very nicely as documentation, to which somebody may well author some detail text, just as we saw before.

    Notably, some of the calls for "more documentation" came about as understanding color TV video isn't as complete nor common as we would like. How the chip did things would be fairly obvious to somebody skilled in that area, testable too. For people lacking that information, it would seem documentation was poor, when the truth is that aspect of things is well documented elsewhere. A similar thing happened long ago on the Apple as the information unique to that computer was provided, and it was a learning exercise for the user, who may or may not have the core understanding needed to make sense of otherwise complete documentation. (Given we were kids, this made sense, and the library cleared up most things)

    Which leaves odd behavior. It was noted in another thread how difficult it would be to write a full test of the Propeller to assure functional compliance with VERILOG simulations of it. Meaning there is the intended behavior, which needs documentation, and there is unintended behavior, which may be documented as it's found later on too. The Propeller saw a little of this with the video system, and how the COGS work internally. Audio applications may output distorted audio because of layout differences inside the Propeller, for example. That was not intended, nor something economically worth documenting to a degree where it would be discovered prior to people needing to understand it.

    Anyway, that's what "code as documentation" means to me. It's frequently "the" documentation for many OSS projects, which will include functional docs, intended behaviors, use cases maybe. But if you really want to know what it does and how it does it, you go and read the code. Seems to me, this has always been viable and reasonable, and going forward, will continue to be viable and reasonable.

    Maybe we aren't far apart on this Heater. I'm really wanting to highlight code as one aspect of documentation available to us. It's important.
  • Heater.Heater. Posts: 21,230
    edited 2014-08-13 09:17
    potatohead,

    Sounds like we are in agreement.

    I perhaps did not put over my point very well though. The P1 is an Open Source project now. It has a Free software license (Note the capitalization there).

    As such it is now down to the "community" to fill in any perceived gaps in the documentation. If they want to. Be part of the community, pitch in, or keep schtum.

    For people to complain about it now would be just rude. They have already been given everything. Far more than they will get from all the other MCU manufacturers out there put together. It's ungrateful.
  • David BetzDavid Betz Posts: 14,516
    edited 2014-08-13 09:39
    Heater. wrote: »
    potatohead,

    Sounds like we are in agreement.

    I perhaps did not put over my point very well though. The P1 is an Open Source project now. It has a Free software license (Note the capitalization there).

    As such it is now down to the "community" to fill in any perceived gaps in the documentation. If they want to. Be part of the community, pitch in, or keep schtum.

    For people to complain about it now would be just rude. They have already been given everything. Far more than they will get from all the other MCU manufacturers out there put together. It's ungrateful.
    Maybe we should ask Parallax to post the source for the P1 manual so it can be updated and enhanced by the community?