With interrupts you also get some extra overhead in that the CPU (the interrupt handler code) has to stash away the current register content to memory somewhere, before it can handle the interrupt itself. When it's finished it'll have to restore those registers from memory again. And call a return-from-interrupt instruction. This adds a lot of overhead compared to just hanging on a pin until something happens (the Propeller way) and off you go. No overhead.
(A few CPUs have shadow registers, or several register sets, or a slightly slower variant - a register file, all to avoid having to save/restore registers to memory. But there's still overhead even there.)
The 65C02 CPU actually has an extra feature, the WAI instruction, which almost works as a single-pin version of the Propeller way: It can hang on a pin waiting for something to happen (sleeping all the while). If you save your registers in advance you can go straight on to execution, no overhead. Note that the reference calls it "ultra-fast interrupt service". Well, that's what you have on the Propeller all the time.
The Propeller has set a new standard for microcontrollers by moving past the archaic system of being interrupted by one task in order to perform another. It frees you to simply run your code, without interruption, and without having to worry about how many CPU cycles concurrent tasks are taking.
The trouble with conventional interrupts is that they....well....interrupt whatever is happening, and it could be important or critical or just plain inconvenient, but the interrupt is important too. However, the conventional CPU has to stop what it's doing and save any important status flags, its PC and perhaps other registers before vectoring off in the new direction. All that overhead, each and every time.
With the Propeller we can't actually interrupt whatever a cog is doing, although a spare cog can be sitting in reserve just waiting for that important thing that needs to be done right away. So the spare cog that has been set up to respond to that important signal is like a personal butler, ready to do whatever it needs to do without having to drop something else and run over. This is so much better than having a single core and having to chop and change all the time. The end result is the same, the signal gets looked after, but the Propeller offers that personal service via each signal's own private cog.
Also, conventional CPUs have a rich peripheral set such as UARTs, I2C, SPI, ADCs, timers etc. which are typically serviced by interrupts that grab, process and store the data or whatever. Not so the Propeller, as it doesn't really have these peripherals, but that private cog doesn't just sit around twiddling its thumbs: it emulates the UART, for instance, and also processes the data and stores it, and since it has already done so there is no actual need to request an "interrupt" signal; it's all done and tucked away already. Again, so much more flexible than a conventional CPU.
If every programmer could dedicate a processor to these tasks that normally require an interrupt then that is what they would do if they were smart but unfortunately they don't have any extra processors to dedicate and they are none the wiser as they only know one way of doing it, like lining up at a public phone vs having your own. I know who will get the first call in without interrupting "anybody" else.
A COG can do the job of a hardware UART up to the point where it needs to communicate with the "main processor" that data is available for processing. With a hardware UART this can be done by interrupting the main processor. With the Propeller, this has to be done by the main processor polling a mailbox or some global variable to determine that data is ready. As Heater pointed out, this can be handled by making the main process event driven, but it still requires the event loop to poll the mailboxes to find out if data is available. This polling takes some cycles and also means that the main processor can't go into a low power state waiting for these events.
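To make that concrete, here is a minimal, untested Spin sketch of the kind of polling being described. The mailbox layout and the stand-in FakeUart cog are invented for illustration (a real driver would be a PASM soft UART); the point is just that the main cog has to keep looking at a hub variable instead of being interrupted:

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000

VAR
  long rxMailbox                       ' hub "mailbox": -1 = empty, else a received byte
  long bytesSeen
  long stack[32]

PUB Main
  rxMailbox := -1
  cognew(FakeUart, @stack)             ' stand-in for a soft-UART driver cog

  repeat                               ' the event loop: poll the mailbox(es) forever
    if rxMailbox <> -1                 ' anything posted?
      HandleByte(rxMailbox)            '   yes: consume it
      rxMailbox := -1                  '   and mark the mailbox empty again
    ' ...the other soft peripherals' mailboxes would be polled here too...

PRI FakeUart | c
  ' pretends to receive a "character" every 10 ms and posts it to the mailbox
  c := 0
  repeat
    waitcnt(cnt + clkfreq / 100)
    repeat while rxMailbox <> -1       ' wait for the main cog to consume the last one
    rxMailbox := c
    c := (c + 1) & $FF

PRI HandleByte(b)
  bytesSeen++                          ' application-specific processing would go here

The main cog never sleeps in that loop, which is exactly the cost being pointed out.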
Doesn't it just have "soft interrupts" if you want them, just like it has "soft peripherals" in all types and combinations, if you want them?
Park a COG on a pin and that pin becomes your interrupt pin. Event happens, pin changes, COG runs and does its assigned soft interrupt tasks and goes back to sleep on the pin. OK, so it wasn't disruptive to the other COGs; to the outside world, it looked like an interrupt handler. You can even have multiple #1 priority interrupts (until you run out of COGs) before you need to start thinking about prioritizing interrupts and worrying about ISR response times and such.
If folks get hung up on having the identical mechanics in place in order to compare solutions then that is their loss. In most anything, your chosen solution is just A way, not the ONLY way!
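For what it's worth, here is a rough, untested PASM sketch of that "park a cog on a pin" idea; the pin number and the hub event counter are just placeholders:

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000
  IRQ_PIN  = 16                        ' placeholder: whichever pin plays the "interrupt" input

VAR
  long events                          ' bumped by the handler cog each time the pin fires

PUB Start
  cognew(@softisr, @events)            ' dedicate a cog as the "interrupt handler"
  repeat                               ' the main cog carries on, never interrupted,
    ' reading 'events' whenever it cares to

DAT
        org     0
softisr mov     pinmask, #1
        shl     pinmask, #IRQ_PIN      ' build the pin mask once

:park   waitpeq pinmask, pinmask       ' sleep (low power) until the pin goes high
        rdlong  count, par             ' the "ISR" body: bump the event counter in hub RAM
        add     count, #1
        wrlong  count, par
        waitpne pinmask, pinmask       ' wait for the pin to drop again...
        jmp     #:park                 ' ...then park on it once more

pinmask res     1
count   res     1

No registers to save or restore, because nothing was interrupted: the handler cog wakes up, does its job and goes back to sleep on the pin.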
I disagree with that analogy because when a cog is handling what would otherwise be handled by an interrupt on another MCU platform, nothing is being interrupted. It's more like an MCU that goes to sleep and is woken up by an external event. Not the same as an interrupt though.
With interrupts you also get some extra overhead in that the CPU (the interrupt handler code) has to stash away the current register content to memory somewhere, before it can handle the interrupt itself. When it's finished it'll have to restore those registers from memory again. And call a return-from-interrupt instruction. This adds a lot of overhead compared to just hanging on a pin until something happens (the Propeller way) and off you go. No overhead.
(A few CPUs have shadow registers, or several register sets, or a slightly slower variant - a register file, all to avoid having to save/restore registers to memory. But there's still overhead even there.)
The 65C02 CPU actually has an extra feature, the WAI instruction, which almost works as a single-pin version of the Propeller way: It can hang on a pin waiting for something to happen (sleeping all the while). If you save your registers in advance you can go straight on to execution, no overhead. Note that the reference calls it "ultra-fast interrupt service". Well, that's what you have on the Propeller all the time.
You're not comparing apples to apples. Taking the example of a UART, the "hang on a pin" handles bit-level events that are completely handled in hardware in a hardware UART. They don't generate interrupts. In fact, most UARTs don't even interrupt at the end of each character but instead buffer many characters in a FIFO and interrupt only when the FIFO is getting full. This is the process that is handled on the Propeller by mailboxes or shared hub variables. Polling these variables consumes cycles and prevents the processor from going into a low power mode, whereas waiting for an interrupt does not.
I don't agree that I'm not comparing apples to apples. I described interrupt handlers vs. waiting for an event. You are comparing MCUs with built-in UARTs vs. MCUs without. That's a different issue. If you instead compare a CPU with typical interrupt support connected to an external UART vs. a Propeller connected to an external UART then you are comparing apples to apples. That the Propeller doesn't have built-in peripherals is a completely different issue. It's another of those Propeller-specific design choices. If you compare the Propeller with another MCU without UARTs built in, then you would have to bit-bang on that one too, and then you would again be comparing apples to apples.
I suppose you're right that "soft peripherals" enter into this comparison as well as interrupts. However, that is usually considered a significant strength of the Propeller. I doubt anyone would connect an external UART to a Propeller, or any MCU for that matter. With the Propeller you'd just use a serial object and with any other MCU you'd use an internal UART.
I'm sure this isn't an original idea but what if you waste one pin to be a signal to the main COG that it should check its mailboxes. Then every soft peripheral can post data in its mailbox and toggle that pin when it has data available. The main COG just does a waitpeq on that pin to wait for an event to occur. Since it's in a waitpeq it goes into low power mode. When the pin is set it will wake up and then it can check all of the mailboxes for each soft peripheral. That's kind of a waste because only one mailbox is likely to have data but it's better than polling all of them constantly. Then, as Heater recommended, the main processor can be event driven and call a handler for whatever mailboxes indicate activity. When done it just goes back to the waitpeq and back to low power mode. Is that a plausible architecture?
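For concreteness, here is roughly what the main cog's side of that scheme might look like in Spin (untested; ATTN_PIN, the two mailboxes and the handler names are all invented for the example, and the soft-peripheral cogs are assumed to write their mailbox and then signal on the pin):

CON
  _clkmode = xtal1 + pll16x
  _xinfreq = 5_000_000
  ATTN_PIN = 27                        ' placeholder "check your mailboxes" pin

VAR
  long uartBox                         ' one mailbox per soft peripheral, 0 = empty
  long spiBox
  long lastUart, lastSpi

PUB Main
  ' ...the soft-peripheral cogs would be started here; each is assumed to
  ' post to its own mailbox and then signal on ATTN_PIN...

  repeat
    waitpeq(|< ATTN_PIN, |< ATTN_PIN, 0)  ' low-power wait for the attention pin
    if uartBox                            ' scan every mailbox on wake-up
      HandleUart(uartBox)
      uartBox := 0
    if spiBox
      HandleSpi(spiBox)
      spiBox := 0

PRI HandleUart(v)
  lastUart := v                        ' application-specific handling goes here

PRI HandleSpi(v)
  lastSpi := v

How the pin gets released again once it has been raised is the wrinkle taken up in the posts that follow.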
This thread has devolved into an excellent example of the basic problem of which the Propeller documentation is but a symptom.
Case in point - Interrupt and Von Neumann architecture are both formal terms that are well defined and well understood in the science of computer architecture. Their definitions are not the subject of debate and the tests for whether a given architecture is Von Neumann or supports Interrupts are trivial.
The architecture of the processor cores within the Propeller SOC is Von Neumann - it provides for a stored program that is co-resident with the data store; instructions and data reside within a homogeneous store with uniform addressing, there is a program counter, a facility for I/O, and external mass storage.
The architecture of the processor cores within the Propeller SOC does not support Interrupts. Period. While there are many approaches to emulating or simulating interruptions, the fundamentals of prioritized, asynchronous, context-preserving transfers of execution control and facilities for restoring context and returning to the point of interruption are simply not present within the architecture.
Among the documentation disasters is that there's not even a way to distinguish in a formal way (meaning with a name) the architecture of the SOC from the architecture of the cores within the SOC. This lack of formality, while seemingly trivial in many cases, often makes detailed discussion close to impossible because no one actually knows what anyone means when they talk about Propeller. While the congeniality and colloquialisms often seem "nice", they actually do a huge disservice to anyone actually trying to precisely understand, discuss or describe the operation of the device. And the key thing about processors is that they are inordinately (absurdly) precise.
That the manual is devoid of formal terms such as Fetch, Execute and Retire and fails to differentiate between Operands and Accesses makes it impossible to precisely understand. Similarly, the Spin language documentation is devoid of formal terms such as Definition, Activation, Formal Parameter, Actual Parameter, Local, Static, Global and many, many more of the terms used to describe the syntax, semantics, composition and execution of software.
While it may be cool for a hobbyist to just "experiment and see what it does", that's not a formula for success with hard real-time, embedded control systems - the very thing at which Propeller excels - in the commercial world. What's required are formal definitions, rigorous specifications and comprehensive verification and validation of devices to those specifications. Telling someone like me to "just look at the RTL" is exactly the wrong answer because the device isn't built from that RTL and there's never been a rigorous verification process to ensure the RTL performs in the exact same manner as the device. And neither has the manual been verified against it.
And don't even start with suggesting that a formal definition is beyond the capabilities of a "small family company". A formal specification would be far, far less effort and a much smaller document than the existing manual.
Is the Parallax approach good for hobbyists? I'd say maybe but I'm not really qualified to answer. But certainly Arduino and other hobbyist platforms have larger market success and that ought to suggest something.
Is it good for education? I'd suggest it actually does a huge disservice to those who get their first hardware and software experience in an environment where they become steeped in a taxonomy that is well removed from that which is well understood in the world of hardware and software. Rather like trying to become a scientist after studying the Bible in an evangelical high school - possible but senselessly difficult to study science when your frame of reference is that the universe is but 6000 years old or computer science when you believe that interrupts co-exist in a world of sequential execution.
Is it good for someone building commercial product? Absolutely, unequivocally not.
I'm sure this isn't an original idea but what if you waste one pin to be a signal to the main COG that it should check its mailboxes. Then every soft peripheral can post data in its mailbox and toggle that pin when it has data available. The main COG just does a waitpeq on that pin to wait for an event to occur. Since it's in a waitpeq it goes into low power mode. When the pin is set it will wake up and then it can check all of the mailboxes for each soft peripheral. That's kind of a waste because only one mailbox is likely to have data but it's better than polling all of them constantly. Then, as Heater recommended, the main processor can be event driven and call a handler for whatever mailboxes indicate activity. When done it just goes back to the waitpeq and back to low power mode. Is that a plausible architecture?
In thinking about this more, the signal couldn't be a pin going high because once a soft peripheral COG sets it high, there won't be any way for the main COG to set it low again. Maybe this could be done with a pullup on the pin and using DIRA to toggle the pin rather than OUTA? Umm, no that won't work either. I guess you need a shared latch. Could a lock be used? I need to give this more thought.
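One possibility, offered only as an untested sketch: you may not need a separate latch or a lock at all, because a Propeller pin's output state is the OR of what every cog is outputting on it. If each soft-peripheral cog holds its own OUTA bit for the attention pin high while its mailbox is full, and drops it once the main cog has zeroed the mailbox, the pin itself acts as the OR of all the "pending" flags: it stays high as long as anything is waiting and falls by itself when everything has been consumed. Something like this, reusing the placeholder ATTN_PIN and one-long-per-peripheral mailboxes from the sketch above:

CON
  ATTN_PIN = 27                        ' same placeholder attention pin as above

VAR
  long mailbox                         ' this peripheral's mailbox, 0 = empty

PUB Demo
  cognew(@periph, @mailbox)            ' launch one soft-peripheral cog
  repeat                               ' main-cog side, as in the earlier sketch
    waitpeq(|< ATTN_PIN, |< ATTN_PIN, 0)
    if mailbox
      mailbox := 0                     ' consuming the data lets the pin drop again

DAT
        org     0
periph  mov     attn, #1
        shl     attn, #ATTN_PIN
        or      dira, attn             ' this cog drives its share of the pin

:post   add     value, #1              ' stand-in for "produce some data"
        wrlong  value, par             ' post it to this peripheral's mailbox
        or      outa, attn             ' raise attention: "my mailbox is full"

:wait   rdlong  tmp, par wz            ' watch the mailbox...
  if_nz jmp     #:wait                 ' ...until the main cog zeroes it
        andn    outa, attn             ' drop our share of the pin again
        jmp     #:post

attn    long    0
value   long    0
tmp     long    0

Each peripheral only ever touches its own mailbox and its own OUTA bit, so no lock seems to be needed; whether that holds up in a real design is something to test rather than assert.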
I don't know that we've ever defined the Propeller as a system on chip.
The next one may be though.
BTW: I found most of the information provided reasonable to understand. There were some gaps, particularly early on, but that didn't last too long.
A whole lot of the "documentation" was, and remains, in the form of code and comments. Frankly, I like this. It's most of the documentation I got early on in computing.
Now, I don't disagree with you. A more detailed, formal treatment would be good. But it's not like what happened was bad, so let's be clear on that.
Frankly, I second the call for you to publish the insights you got clarified. We all could benefit.
The trouble with conventional interrupts is that they....well....interrupt whatever is happening, and it could be important or critical or just plain inconvenient, but the interrupt is important too. However, the conventional CPU has to stop what it's doing and save any important status flags, its PC and perhaps other registers before vectoring off in the new direction. All that overhead, each and every time.
With the Propeller we can't actually interrupt whatever a cog is doing, although a spare cog can be sitting in reserve just waiting for that important thing that needs to be done right away. So the spare cog that has been set up to respond to that important signal is like a personal butler, ready to do whatever it needs to do without having to drop something else and run over. This is so much better than having a single core and having to chop and change all the time. The end result is the same, the signal gets looked after, but the Propeller offers that personal service via each signal's own private cog.
Also, conventional CPUs have a rich peripheral set such as UARTs, I2C, SPI, ADCs, timers etc. which are typically serviced by interrupts that grab, process and store the data or whatever. Not so the Propeller, as it doesn't really have these peripherals, but that private cog doesn't just sit around twiddling its thumbs: it emulates the UART, for instance, and also processes the data and stores it, and since it has already done so there is no actual need to request an "interrupt" signal; it's all done and tucked away already. Again, so much more flexible than a conventional CPU.
If every programmer could dedicate a processor to these tasks that normally require an interrupt then that is what they would do if they were smart but unfortunately they don't have any extra processors to dedicate and they are none the wiser as they only know one way of doing it, like lining up at a public phone vs having your own. I know who will get the first call in without interrupting "anybody" else.
AWESOME info!!! So.. um....how do you organize your MCU resources in Tachyon?
I can not catch up with what is going on here! Maybe I use the wrong words. I read the documentation of the Propeller and to me it is absolutely sufficient. I believe I am able to think and understand and to grasp concepts. I accept that ksldt faces some problems. So do I. But is time invested here not wasted? Trigger an interrupt and forget the return. There is so much important work to do in the forum that capable Propeller heads should not waste time. Dot.
I think the Propeller documentation is fine. What takes a while is to get a feel for how one does things on the Propeller. It isn't the same as other processors and in many ways it is better. In any case, some things are done differently and you have to get a handle on that. Maybe that's what is missing, a way to map paradigms from more traditional processors onto the Propeller. I don't mean something like emulating interrupts. I mean learning the way to handle events using the native features of the Propeller rather than trying to make it look like a traditional processor.
I do admire your pursuit of absolute rigor and academic/scientific excellence.
I do agree that documentation can always be improved.
I did once miss a rigorous definition of the Spin language when I tried to write an assembler for PASM. Never mind Spin itself, you have to be able to parse the whole Spin/PASM combo to make sense of the constant definitions and the expressions used in DAT initializations etc. There is not even a BNF description of the Spin syntax for God's sake!
However, you are showing a distinct lack of experience of the use of real processors and MCUs in the real world. Or indeed many lesser components.
For example: The Boeing 777 is a "fly by wire" plane. The Primary Flight Computers that monitor the sensors and control the flight surfaces are safety critical components. Those PFCs are built out of Motorola 68000, Intel 486, and AMD 29K processors.
Having worked on that system I am very sure that not every quirk and bug in the 486, for example, was known or understood when undertaking that design. Intel processors have always had many "features". Ever hear of the "F00F" bug in the Pentium? Or the fact that if you multiply by an immediate constant that happens to be negative on a 286 you will always get the wrong answer?
At some point you have to forego the analysis and build something. Otherwise you will be there forever.
We are still waiting with interest for your description of what incorrect or missing documentation was causing you problems. I'm sure all Propeller users could benefit from it.
By the way. Who on Earth cares how you classify a Propeller or anything else? Von Neumann or not makes no odds. What matters is what it does. Not everything has to fit neatly into one classification or another. In fact most things don't.
Similarly for interrupts. For sure we can devise code that has parts hanging on WAITxx instructions such that the whole ensemble behaves exactly as if it had interrupts.
Nobody actually wants interrupts, as a thing in themselves. What they want is the work and effect they can achieve. Interrupts are one way, multiple cores are another, a very fast CPU and polling is yet another.
We look forward to your "unspecified behaviors of the Propeller" descriptions.
I disagree with that analogy because when a cog is handling what would otherwise be handled by an interrupt on another MCU platform, nothing is being interrupted. It's more like an MCU that goes to sleep and is woken up by an external event. Not the same as an interrupt though.
Like for a Z80 peripheral, set up the interrupt vectors, and execute a HALT instruction.
What's so different?
Is the Parallax approach good for hobbyists? I'd say maybe but I'm not really qualified to answer. But certainly Arduino and other hobbyist platforms have larger market success and that ought to suggest something.
Pure marketing...
Is it good for someone building commercial product? Absolutely, unequivocally not.
There seems to be a lot of people who disagree with that.
And as an engineer and science oriented Christian I find your dig at my faith to be out of place.
I'm not asking anything other than opinions as to whether my pin-driven scheme for avoiding a polling loop is viable. As it turns out, it isn't. There would need to be a latch external to the chip as far as I can tell and maybe as many as three pins instead of just one. However, I think I remember this being discussed before and someone with a lot more hardware knowledge than I have may remember the optimal solution and I'm anxious to hear it.
Like for a Z80 peripheral, set up the interrupt vectors, and execute a HALT instruction.
What's so different?
So David is asking - what then?
Perhaps I misunderstood...when the Z80 executes a HALT all processing stops. This is not so on the Propeller chip. When a cog is waiting for a pin state (including WAITPNE/WAITPEQ) the other cogs continue to perform their tasks uninterrupted by the cog, even when the pin state has triggered the cog to take some action. The same cannot be said for the Z80 and mind you, I spent 10 years doing Z80 development including using vectored interrupts. My NMI locked into the 60 Hz from the AC line (borrowed idea from Commodore).
Call me stupid but I don't see anyone "digging" at anyone's faith in Christ around here. Certainly not in this thread.
If your faith in Christ, or whatever, is strong then it matters not if anyone "digs" at it. Hardly worth a mention.
I'm really not sure what any of that has to do with interrupts in computer architectures. Or with whether the documentation is good or not.
OK, whilst we are here. I find it hard to keep up with this interminable debate about interrupts.
Seems to me that nobody actually wants interrupts as a thing of desire in themselves. No, what they want is what interrupts make possible, especially when working with a single CPU machine.
Multiple cores, hardware threading, and I don't know what, are other ways to achieve what can be done with good old fashioned crude interrupts.
The debate should not be about "we should have interrupts on the Propeller", it should be about what improvements we can make to the multi-core architecture to achieve the same ends.
Because I believe that adding full-up, multi-level, prioritized interrupt handling to a Propeller is trying to make two solutions to the same problem, which is not just redundant but adds unnecessary complexity and expense. Not just for the silicon implementation but for the user.
@David,
Your pin driven scheme for avoiding polling is totally valid, as long as you can afford to sacrifice the pin. It was used to good effect by Linus Akesson in his amazing video and sound demo on the Propeller. Which sadly I cannot find a link to now.
What does BOEn do?
RESn is defined to be an IO pin on page 15 of the manual - when is it driven?
What happens on pins P31..P28 when RESn is de-asserted? Really. There's 1.65 seconds of undocumented behavior right there.
On page 375, the manual states the "Z is always cleared" by the WRLONG instruction. See also WRBYTE, WRWORD. Is it? Hmmm ... no.
Do BYTEMOVE/WORDMOVE/LONGMOVE deal with overlapping source/destination?
What does BOEn do?
RESn is defined to be an IO pin on page 15 of the manual - when is it driven?
What happens on pins P31..P28 when RESn is de-asserted? Really. There's 1.65 seconds of undocumented behavior right there.
Did you happen to look at the Propeller chip datasheet?
The real question about Interrupts is about resource utilization. Is the dedication of an entire core a reasonable approach to servicing a medium-frequency, low-latency event?
The answer is too application specific to generalize - in some instances yes and in others no. But generally speaking, the Propeller approach is expensive in terms of area vs. utilization tradeoffs.
The discussion about Interrupt driven implementations being more complex is nonsense - any approach to servicing an event with low-latency is fraught with complexity. Whether that's dealing with reentrancy problems associated with interrupts or managing path length between polling in a cooperative system - the complexity is equivalent. However, it's far simpler to create a framework in which Interrupts are predictable and reliable than it is to create a framework in which code path lengths are predictable and reliable. And, if you don't want an interrupt driven system just don't use that capability. But if you need it and it's not there, you're SOL.
@Heater http://www.linusakesson.net/scene/turbulence/
That is my favorite graphics demo for the propeller. I can never draw anything that fast on the propeller (and yes, I use assembly), but it's probably because I turn the detail level up way too high. I'm jealous of how much faster his mandelbrot renderer in turbulence is than mine, I should probably look at the source to see how he did it.
Previously you have said that you have been in email communication with Parallax and gotten these issues resolved. I was rather hoping to hear what the issues were and especially what the resolutions were. Could be useful information to all of us.
The discussion about Interrupt driven implementations being more complex is nonsense - any approach to servicing an event with low-latency is fraught with complexity.
Do read up on the extensive literature that points out that interrupts make program behavior impervious to analysis. That is to say, too complex to reason about.
Do read up on Communicating Sequential Processes (CSP) as introduced by Tony Hoare in 1978 and subsequently implemented in hardware and software in the Transputer and its Occam language. And later the XMOS devices and the XC language.
There are less complex ways to do what is required than interrupts. Ways that can actually be reasoned about.
It's far simpler to create a framework in which Interrupts are predictable and reliable...
We have to stop there. Interrupts by their very nature are not predictable.
If you don't want an interrupt driven system just don't use that capability. But if you need it and it's not there, you're SOL.
Not so. There are other better ways to get the thing that you want from interrupts without actually using the crude interrupt on a single CPU mechanism.
The hardware threaded and multi-core, event driven, XMOS devices show that to be true.
Having said all that. I will admit that possibly the current Propeller does not provide everything we need to totally avoid that lingering desire for interrupts. It's pretty close though, for such a simple device.
Okay, this is getting off topic and turning into a debate/argument on something (architecture) that is not going to change. If you need interrupts, you can obtain the propeller files (now open) and an FPGA board and create a Propeller with interrupts. Otherwise we're spinning our wheels here needlessly.
The original post was about Propeller architecture so I don't think the discussion is too off topic. Also, the original poster brought up the question of interrupts so I don't think that is off topic either. It seems perfectly valid to me to discuss how someone familiar with a more traditional architecture would do similar things on the Propeller and also how those things could be done in a Propeller-like fashion rather than just trying to emulate an incompatible architecture.
You mean like these?
https://www.youtube.com/watch?v=Uk_vV-JRZ6E
https://www.youtube.com/watch?v=PF7EpEnglgk
Thanks, I have been trying to find Linus Akesson's demo all evening. Does not help that I didn't remember his surname or the name of the demo.
It's totally awesome and shows what can be done on a machine that is so stupid it doesn't even have interrupts.
Nor a multiply instruction!