Propeller and interrupts?
henrib75
Posts: 9
in Propeller 1
I have a beginner's question. How do you simulate interrupts on the Propeller? I love the philosophy of this microcontroller, but I cannot get interrupts out of my head when I develop... Thanks!
Comments
It really depends on how precise the timing needs to be ... some other techniques/combinations are:
- Synchronising the instructions to match the timing requirements.
- Synchronised instruction timing can even be extended to removing the WAITs, which provides top throughput but is obviously the trickiest to fine-tune. Many VGA displays are done this way.
- Polling multiple conditions in a general loop can work when the demands are low. This is the easiest multitasking to comprehend (see the sketch after this list).
- Polling in a simplistic cooperative task switching arrangement can be very effective. JMPRET is used heavily here I think.
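To make the polling idea concrete, here is a minimal Spin sketch of such a loop; the pin assignments and handler names are just placeholders:

CON
  BUTTON_PIN = 0                          ' hypothetical pin assignments
  SENSOR_PIN = 1

PUB mainLoop
  repeat                                  ' one big loop; every condition is checked in turn
    if ina[BUTTON_PIN]                    ' button line high?
      handleButton
    if ina[SENSOR_PIN]                    ' sensor line high?
      handleSensor

PRI handleButton
  ' keep handlers short so the other conditions are not starved
  return

PRI handleSensor
  return

The obvious weakness is that a slow handler delays every other check, which is exactly why the demands must be low.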
Why do you need interrupts?
Well, only so that some code gets run, as soon as possible, in response to some event happening. Usually an external event that changes an input pin.
Ah, so you don't actually want interrupts at all. What you want is some code to get run in response to an event.
It just so happens that a Propeller has eight processors, COGs. Code running on any of those COGs can halt execution and wait on changes on input pins, using the WAITxx instructions.
So, just put the code that you want to run in response to an event on one of those COGs and have it do a WAITxx that triggers its execution when a pin changes.
Boom, problem solved. No interrupts required. Much easier and faster than using interrupts.
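A minimal Spin sketch of that arrangement (the pin number and the response code are placeholders):

CON
  EVENT_PIN = 8                            ' hypothetical event input

VAR
  long stack[32]

PUB main
  cognew(eventHandler, @stack)             ' dedicate a cog to the event
  repeat                                   ' the main cog carries on, never interrupted

PRI eventHandler
  repeat
    waitpeq(|< EVENT_PIN, |< EVENT_PIN, 0) ' halt this cog until EVENT_PIN goes high
    ' ... respond to the event here, within a few system clocks of the edge ...
    waitpeq(0, |< EVENT_PIN, 0)            ' wait for the pin to go low again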
Peter, evanh, and Heater: Thanks for the explanations. I have had to answer this exact question for a few folks who don't understand why I like the Propeller so much. I have never had a really clean answer, but summarizing your three explanations together makes it so clear to me. In the past, I have just said that I can't understand why I have to stop doing one thing to do something else on a microcontroller. I first ran into this struggle when I tried to replicate a Cypress PSoC project 4 years ago and just couldn't grasp why the 200 lines of main code had to halt the serial port while it drove a servo. This drove me nuts. My equivalent program on the Propeller just used 2 objects from the Propeller Tool's library running in 3 cogs total, and my main code was around 20 lines.
Think of it in terms of working?
Would you prefer a bunch of bosses "Interrupting" you with demands, or a group of Workers under your command?
That's the difference between the way other processors work and the Propeller. Instead of dividing time between many requests you either assign jobs as they are needed or assign dedicated tasks to each Cog.
For example, Serial communication is often dedicated to a Cog since it's time critical and it just runs the same program over and over forever.
Your main program never has to worry about what the Serial Cog is doing, only that data has arrived or needs to be sent.
The Serial Cog knows nothing about what the main program is doing, only that Serial data has arrived or data has shown up in the Send buffer to be sent out.
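That division of labour is exactly how a serial object gets used in practice. A rough sketch with FullDuplexSerial from the Propeller Tool library (pins and baud rate are just examples):

OBJ
  ser : "FullDuplexSerial"                 ' the driver launches its own cog

PUB main | c
  ser.start(31, 30, 0, 115_200)            ' rx pin, tx pin, mode, baud
  repeat
    c := ser.rxcheck                       ' non-blocking; returns -1 if nothing arrived
    if c => 0
      ser.tx(c)                            ' echo it back; the serial cog does all bit timing
    ' ... the main cog is free to do anything else here ...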
If you truly want to simulate an interrupt on the Propeller, then as mentioned you could dedicate a COG to monitoring events, EXTERNAL and/or INTERNAL, that would signal the other COGs to halt operation until the interrupt flag had been cleared.
There is nothing scary about interrupts, you just have to understand them and what they are doing. Personally I use interrupts all of the time in the PIC micro world and have no issues.
Having had to fix many bugs in interrupt-driven software over the decades, it's clear that programmers have a very hard time reasoning about what they are doing. These bugs have a habit of being rarely occurring, timing-related glitches that are hard to find, reproduce, and debug.
Just yesterday I was reading how some device driver does not work with the Linux kernel with real time patches due to some race condition or other. Seems programmers far more skillful than me still get this wrong.
Juggling an interrupt or two on a PIC is fine. But the bottom line is there is only one processor; each interrupt steals time from other things you want to do, introduces latency, and slows processing. It destroys deterministic execution.
I discovered that it is possible to stop the Linux kernel from scheduling processes on particular cores on a Raspberry Pi. With that and the ability to map I/O registers into user memory it is possible to take over a core and bit-bang on the I/O without wasting any time in the kernel. A toggle rate of up to 60MHz. Probably more if I wrote the code in assembler.
Except...there is still some pesky Linux interrupt being handled by my dedicated core. Which knocks 10us holes in my bit-banged output stream. So far I have not managed to find out what it is or how to stop it hitting my core.
Interrupts....grrrr...
Thank you for your answers.
Why do you need interrupts?
Good question ... Having a cog monitor a condition and trigger some code is one thing. But for me it starts to become more complex (and less "clean") if that same condition must also stop code that is already running, then resume it where it left off once the first cog's code has completed.
The idea is to execute one piece of code OR another depending on the interrupt. But I cannot always use one big loop that tests continuously, because if a piece of code takes a long time to execute, the test may miss the signal or handle it too late.
Am I clear?
henrib75, Welcome to the forums!
Thank you! But I have been here since 2011... :-)
They are second nature to those who have been hacking low level code for decades.
Exactly!
Sorry for my English but I'm French (thanks, Google Translate)...
I still don't understand what you are trying to do, but it should almost always be far easier to do with multiple cogs than with interrupts. Do you have a simple flowchart or outline of what you are trying to do, or could you describe it in enough detail that we can explain how it could be done with the Propeller? I suspect the answer will be so simple that you will smack your head and exclaim, "but of course!"
- One is cooperative switching. This involves inserting a jump that stores its return point; JMPRET is built for this very purpose (sketched after this list). It is not unlike a call/return, but instead of jumping to a subroutine you are jumping to an alternative task. You put the JMPRET at strategic inline points of all concerned tasks so that all tasks are always being monitored for state change.
- The other way is what I'll call curated execution. This would be something like a repeating loop that only does one iteration and exits without finishing all iterations, with the knowledge it will be re-executed very soon. This then is part of the main big loop, with no one function consuming much time in any one moment.
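A bare-bones PASM sketch of the JMPRET style, with two placeholder tasks; this is the same trick FullDuplexSerial uses to interleave its receive and transmit code:

DAT
              org     0
task_a        ' ... a slice of task A's work ...
              jmpret  a_pc, b_pc          ' save A's resume point in a_pc, jump into B
              ' ... more of task A ...
              jmpret  a_pc, b_pc          ' yield again
              jmp     #task_a

task_b        ' ... a slice of task B's work ...
              jmpret  b_pc, a_pc          ' save B's resume point, jump back into A
              jmp     #task_b

a_pc          long    0                   ' filled in by the first jmpret
b_pc          long    task_b              ' task B starts at its top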
However, for the past nearly 10 years I have been coding the Prop and I have never needed interrupts.
I find I code a whole lot more in the routines that would have been done in an interrupt, making the main code routine simpler, shorter, and mostly not even time-sensitive, meaning that often my main routine can be done in Spin, which saves me programming time too! Add to this, each piece of code is simpler, meaning fewer chances for bugs.
Yes, pretty clear. I think at this point we need to know more details before being able to sensibly advise:
1) What is the minimum length of this input pulse that you are afraid of missing?
If it is going to be something really short then the best thing is to dedicate a COG to waiting on that pin or pins, then setting some flags to indicate what code should be running.
2) Is there any externally visible response to that interrupt that has to be output within some short time? That is to say what is the latency requirement here?
3) What language are you writing this in: Spin, PASM, C, other?
One brutal approach to this is to have the event handling COG stop and start a COG in response to a pin change. When a pin event indicates a change of operating "mode" is required, the event handling COG does a COGSTOP on whatever COG is running the code that should be stopped, then a COGSTART to get the code for the new mode running.
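A Spin sketch of that brutal approach; the pin, the two modes, and the method names are all hypothetical:

CON
  MODE_PIN = 15                            ' hypothetical "change mode" input

VAR
  long watchStack[32], modeStack[64]
  long modeCog

PUB main
  modeCog := cognew(modeA, @modeStack)     ' start in mode A
  cognew(eventWatcher, @watchStack)        ' dedicate a cog to watching the pin
  repeat                                   ' the main cog is free for other work

PRI eventWatcher
  repeat
    waitpeq(|< MODE_PIN, |< MODE_PIN, 0)   ' wait for the mode-change event
    cogstop(modeCog)                       ' kill the running mode, wherever it was
    modeCog := cognew(modeB, @modeStack)   ' start the code for the new mode
    waitpeq(0, |< MODE_PIN, 0)             ' wait for the line to drop

PRI modeA
  repeat                                   ' ... mode A's work ...

PRI modeB
  repeat                                   ' ... mode B's work ...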
In a typical design I create a flow chart where each node is assigned an index number and is designed to fall through in code, where a dispatch handler directs program execution based on the index value. Multiple indexes allow for atomic functionality or multi-threaded types of operation. The flow chart is essentially a state machine, and more than one state machine can be interleaved. "Real time" is considered anything that happens at 100Hz or greater. To achieve deterministic timing, the interrupt interval is set and a counter is cleared. I have it set up so that the interval I set resolves to the number of instructions I want to wait. Upon calling the code, the interrupt is enabled, allowing the counter to count. At the end of the code I want to be deterministic, it waits until the interrupt interval has expired. That way, no matter what conditional branches happen within the code snip, the time of execution will always be the same.
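For what it's worth, the Propeller gets that same "wait out the rest of the interval" behaviour without any interrupt: waitcnt sleeps until an absolute counter value, so loop time is independent of which branches ran. A sketch (the 100Hz rate is just an example):

PUB fixedRateTask | t
  t := cnt                                 ' capture the system counter once
  repeat
    ' ... state dispatch, conditional branches, varying amounts of work ...
    waitcnt(t += clkfreq / 100)            ' sleep out the rest of the 10ms slot; total
                                           '  loop time is constant whatever path ran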
With multiple COGs, it's easy to throw different functionality at each one, but it's like nickel-and-diming and adds up quickly. Do you really need an entire COG dedicated to serial communication, or motor control, or a keypad, etc.?
I've got a project right now that controls 3 stepper motors at 51,200 pulses per revolution, a 5000 rpm spindle motor, touch-screen communication/control, system fluid controls, proximity sensors, a head rotation motor, and a few other things with the equivalent of a single COG.
I'm not ranting really; my point is, and I have said this before: multiple processors are great, but we are a long way from programming them efficiently, and it's simply a mindset or way of thinking that needs to change.
A Propeller analogy .... "Who cares if you have an 8 lane or 16 lane highway with a bus in every lane if the only person in the bus is the driver" ... I say "load the thing up" and make use of the 40 or so seats on each bus!!
Admittedly it is something of an inversion of the kind of thing people normally think of when they think interrupts. That is, a big program in a main loop with little interrupt handlers butting in. No, you can do it the other way around, put all your code into an interrupt handler or handlers, triggered by timer tick or external clock.
The most elegant approach like this that I worked with is a language called Lucol that was used in avionics systems by Lucas Aerospace. In that system all code was hung off an interrupt: 10ms, 100ms, whatever. To ensure you could not overrun your timing budget, the Lucol language did not support loops. No "for", "while" or "goto" backwards in the code. The magic result of that was the compiler could report the exact maximum execution time of the complete build and warn if timing was getting tight. Which is kind of important in safety-critical systems.
Of course most programmers hated to work in Lucol because it had no loops and other conveniences.... ah well.
I can totally agree that throwing a whole 32-bit CPU at a UART or keypad, etc., seems very wasteful.
But it does have one major advantage....
I can take an object from you, a UART for example, some objects from somewhere else, VGA say, and create a top level program object for myself that uses all of that.
I can do this very quickly and easily because I know that any time critical parts of these "foreign" objects are going to run in total isolation in their own processor. They are not going to be fighting each other for time. There is no interrupt priority or other scheduling for me to think about. It all just works.
If the Prop were a single CPU machine with interrupts all that would be impossible. Or at least orders of magnitude harder.
In this case, the background cog simulates a 1ms interrupt event.
The project is a controller for those big -- somewhat clunky -- roads signs used around construction sites; usually flashing a pattern of building chevrons moving left or right. The HMI has six buttons that need to be debounced, and due to the nature of the code behavior, I'd also like an auto-repeat function when a button is held down.
A dirt simple background cog handles these requirements. As the code is all in Spin, I put it at the end of the main file which gives the "foreground" code access to the background variables. When a button press is detected in the foreground, that button's timer is set to a negative value which determines the auto-repeat rate.
Easy-peasy, and works well. For grins, there is also a global milliseconds timer that can be used for event timing where using the cnt register would be inconvenient.
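A loose Spin sketch of that kind of background cog; the pin assignments and the exact debounce bookkeeping are guesses at the description above:

CON
  BTN_BASE = 0                             ' hypothetical: six buttons on P0..P5, active-high

VAR
  long millis                              ' global milliseconds timer
  long btnTimer[6]                         ' per-button timers shared with the foreground
  long bgStack[32]

PUB start
  cognew(background, @bgStack)

PRI background | t, idx
  t := cnt
  repeat
    waitcnt(t += clkfreq / 1000)           ' fire once per millisecond: the simulated interrupt
    millis++
    repeat idx from 0 to 5
      if ina[BTN_BASE + idx]               ' button held: timer counts up; a negative preset
        btnTimer[idx]++                    '  from the foreground sets the auto-repeat rate
      else
        btnTimer[idx]~                     ' released: clear the timer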
As a mechanic, I work daily with tools that could produce serious injuries if not used properly. It's part of my job to know each tool in my toolbox and how to utilize it safely and effectively.
Interrupts are just another tool in a programmers toolbox. They can be used or avoided depending on how familiar you are with them.
Interrupts do screw up timing but most applications aren't time critical enough for this to be an issue. Humans aren't sensitive enough to notice a one millisecond delay and you can get a lot done in one millisecond.
If you have N cogs available you will eventually need N+1. Interrupts allow you to do more with the same resources. It's like having more cogs. Who wouldn't like that!
Sandy
No, the aversion isn't about interrupts, the aversion is about the idea that you need interrupts or you can't get things done.
However, the P2 has a big COG, so silicon UARTs would be many times smaller than a COG, though with less smarts. The P2 now has SmartPins that can relieve the COG of a great deal of work. But these SmartPins are not small either.
For the P1 or P2 to keep the philosophy of all pins being equal, we would need a lot of silicon UARTs, I2C, SPI, etc, etc. Then we have a huge number of registers just to set things up. ARM, etc: here we go again.
No thanks, I am happy with the P1/P2 philosophy, although just maybe we should have thought about having a number of small COGs too, for I/O processing, instead of wasting massive COGs on it. It's too late now though!
Which means each instance is minimal. Tor will be right: a typical full UART will be many times bigger than what is in Smartpins.
The hidden die space cost with Smartpins is the data paths between them and the Cogs. Chip removed a huge ring bus from the Prop2-Hot design that had allowed any DAC to be driven from any Cog. The removal more than doubled the space available for HubRAM at the time.
I suspect this huge bus has somewhat been reinstated with the incremental widening of Smartpin access from the Cogs. Which may explain the significant miscalculation of how many Cogs would fit.
"Do you really need to have COG dedicated to serial communication, or motor control, or keypad, etc.?"
Good remark. Because 8 cogs is both a lot and not much at the same time...
"a language called Lucol"
Do you have any information on this? Google did not give me anything ...
"Still do not understand this aversion to interrupts."
Personally I have no dislike of interrupts. They have been very useful. My problem is that sometimes I still think in terms of them...
"I'm sure the propeller way is best. I guess we’ll see when to P2 goes on sale"
Well, yes, when finally?
"Why do you need interrupts?"
This is only a general question. I do not have a specific need right now.
It's very hard to find any information on Lucol nowadays. It was an in-house language at Lucas Aerospace. I last worked on it around 1990. It was a pre-internet thing. Probably gone the same way as the VAX computers we used to develop it with.
Lucol gets a brief mention in this document:
https://gcc.gnu.org/wiki/cauldron2012?action=AttachFile&do=get&target=petergarbett1958.pdf
Which is basically a sales pitch for Ada to replace such languages. Interestingly it includes a diagram of part of a control system developed in Lucol. Those diagrams were produced from the Lucol source code itself and could be used as documentation.
One team at Lucas, whilst I was there, did adopt Ada instead of Lucol for an avionics control system. It was a disaster. Its run time took random amounts of the 10 or 100ms time budget it had, mostly exceeding by far the 50% load the requirements demanded and often hitting 95%. I measured it with an oscilloscope. When I asked the team's project manager if he could guarantee it would never exceed the 100% time budget and hence cause failure, there was a deadly silence... they had no idea!
There is the book: "FM'99 - Formal Methods: World Congress on Formal Methods in the ..., Volume 2"
https://books.google.fi/books?id=kUVsCQAAQBAJ&pg=PA1818&lpg=PA1818&dq=lucol+lucas+aerospace&source=bl&ots=D-gbufPF3m&sig=19DGShO7hiBZmpVBNv36nlyBHLA&hl=en&sa=X&ved=0ahUKEwjN4c-WycbYAhUEBSwKHcUrAG0Q6AEIQjAD#v=onepage&q=lucol lucas aerospace&f=false
Which has a piece on Lucol.
It talks about how formal reasoning and timing analysis is easily done with Lucol programs.
I can't for the life of me find any actual source code examples.
Edit: Lucol gets a mention in a list in this presentation on Safety Critical Software Development:
http://www.dcs.gla.ac.uk/~johnson/teaching/safety/powerpoint/10_DO178B.pdf
BINGO! This document has a lot of description of the Lucol language and its reasons for being the way it is. There is also some example Lucol code:
https://proceedings.asmedigitalcollection.asme.org/data/Conferences/ASMEP/83943/V005T14A006-82-GT-251.pdf
Glad to know I did not imagine that big chunk of my life!