
Interrupts (no! not again!)

Bill Henning Posts: 6,445
edited 2015-06-23 15:08 in Propeller 1
I've recently been working on some PIC projects.

I'd rather use Props, but due to BOM cost constraints, I have no choice (also some customers insist on "code protection").

"deep diving" into PIC data sheets is NOT fun, and finding all the fiddly bits to configure peripherals, and their interrupts, can be time consuming - especially since there are so many variations of PIC's...

Anyway, just wanted to point out (once again) how much more fun, and easier, it is to develop with the prop.

Comments

  • edited 2015-06-20 13:52
    Anyway, just wanted to point out (once again) how much more fun, and easier, it is to develop with the prop.

    Imagine how much broader the potential propeller market would be if, in addition to its already excellent feature set, the prop featured interrupts as well. Those who don't like them wouldn't have to use them while those who liked them would feel right at home.

    I know it would add a level of complexity to code but I can see it being very useful in some cases.

    Sandy
Heater. Posts: 21,230
    edited 2015-06-20 14:11
    Sandy,

Over the years we have imagined interrupts on the Propeller. It has been suggested a billion times and discussed to death. Thankfully Chip has resisted the temptation. Even at the height of adding layer upon layer of complexity to the last iteration of the P II design.

    I wish there were a permanent web page some place that laid out all the reasons why the Propeller does not have interrupts and why it's a bad idea to add them.

    Complexity of the code is one thing. Complexity of the hardware architecture is undesirable, it would need a stack in each COG, it would need a priority system, it would need registers to configure all that, it would probably need instructions to enable/disable interrupts.

    We will have 16 COGs in the PII. That is equivalent to 16 interrupt handlers. More than enough I believe.
  • edited 2015-06-20 14:30
If you have a large group of people with a certain feature request versus a small group of people with an opposite feature request, the best thing to do would be to accommodate the large group, especially if the large group can be accommodated without inconveniencing the small group.

    I'm not saying I would use interrupts if they were there, I probably wouldn't, but I think it's important to give people a choice and not be constrained by your perception of how people should think or how they should develop ideas.

    The original prop had 8 cogs, more than enough ( at the time ). The prop II will have 16, more than enough. How much is enough? Just a little bit more.
Mike Green Posts: 23,101
    edited 2015-06-20 14:32
    @Heater,
    Ditto. I've worked with all sorts of interrupt schemes over the years. Yes, if you don't use them (disable them by default), you can leave out the not insignificant software complexity. Unfortunately, you can't leave out the hardware / architectural complexity. At the very least, you need somewhere to save the current processor state ... doesn't have to be a stack if you only have one interrupt with no priorities involved. You need some way to restore the processor state ... not too complicated if you only have one interrupt. Who's going to be satisfied with only one interrupt though? Then you need prioritization and some kind of complex state memory. A simple stack will rarely do. You'll need to control individual interrupts, etc.

    On the software side, if even a simple single interrupt is implemented, how long do you think it will be before a small vocal group of interrupt users complains mightily that the libraries don't support interrupts or maybe there'll have to be two standard libraries, one that supports interrupts and one that works with parallel cogs executing without interrupts. Sure, a lot of the library can be implemented to work either way, but that complicates the design because you have to plan for the presence of interrupts ... not a huge piece, but an added complication.

    I've mentioned this before, but, years ago, I wrote several operating systems for both small (Datapoint 1500 - Z80 based) and large (IBM 360/370) hardware where the operating system kernel was deliberately designed to hide the existence of (in this case necessary) interrupts at as low a level as possible. The rest of the operating system was written in terms of multiple parallel tasks. In both cases, the operating system was much simpler to design, write and debug than the equivalent using interrupts explicitly. Interrupts at a very low level were only used because the hardware required it. The IBM 360 architecture particularly provided some I/O status information only as part of its interrupt handling.
Bill Henning Posts: 6,445
    edited 2015-06-20 14:41
    And of course the deadly part:

    Any hypothetical cog that supported interrupts would no longer be deterministic (in any shape, way, or form)
Heater. Posts: 21,230
    edited 2015-06-20 15:14
    Sandy,
If you have a large group of people with a certain feature request versus a small group of people with an opposite feature request, the best thing to do would be to accommodate the large group,
    Here I fundamentally disagree.

With respect to the "interrupt feature" there are thousands of micro-controllers available, from dozens of vendors, of all shapes and sizes and covering a huge range of price and performance. Those who want interrupts, for whatever cock-eyed reason, can have as many as they like already.

    Interrupts are just a kludgey hardware hack to make it appear as if you have the multiple processors you actually wanted in the first place. Born in the times when a CPU was a big and expensive thing and you would only ever have one. When you can actually have many why carry that old interrupt baggage and complexity along?

    I don't believe pandering to the masses would help sell many more Propellers. They have many reasons for not using a Propeller already. Like: "Oh, you can only run 496 instructions in a COG at full speed, how useless is that?" or "Oh, you cannot run big compiled C code at full speed, how useless is that?" And so on... The addition of interrupts will not sway them. It will not make the Prop into an ARM or PIC or AVR.

    Philosophically, thank God there is someone in the world, Chip Gracey in this case, who has a vision of how he would like things to be and does not let it get polluted by the opinions of "the masses" or the whims of the marketing department.
    ...especially if the large group can be accommodated without inconveniencing the small group.
    I do not believe this is possible.
Cluso99 Posts: 18,069
    edited 2015-06-20 15:16
    Well, interrupts could be implemented in the simple extreme...

    Effectively executes a CALL to a fixed address, saving the return to another fixed address. No state saving/etc. This would be the responsibility of the Interrupt routine.

    Great, now what causes the interrupt?
    Is it an I/O pin (input mode)? Is it a low, high, or change of state?
    Is it a received character? from a UART, I2C, SPI, or something else?
    Is it a timeout?
    Is it another Cog? And how does it/they do it? Which cog(s) do they interrupt?

    What happens to the existing pipeline? Presumably, both the Interrupt CALL and Interrupt RET would flush the pipeline.

We can do all of this without interrupts. We just do it a little differently, which is easier once you understand the ways of working without interrupts.

    And of course, the code is no longer deterministic (as Bill stated).

    No thanks! I'll stick to the simpler propeller architecture.

    BTW I used to (commercially) program an ICL mini. It had 20 cores (known as partitions back then). There were no interrupts in the instruction set. Deterministic programming was not a consideration (although I did achieve that in a very specialised disk copy program I wrote). When I/O from a terminal or disc was required, the program would just wait until the operation was complete. Of course, there was no low level programming done on this mini.
Hal Albach Posts: 747
    edited 2015-06-20 16:01
    Cluso99 said;
    "BTW I used to (commercially) program an ICL mini. It had 20 cores (known as partitions back then). There were no interrupts in the instruction set. Deterministic programming was not a consideration (although I did achieve that in a very specialised disk copy program I wrote). When I/O from a terminal or disc was required, the program would just wait until the operation was complete. Of course, there was no low level programming done on this mini."

    ICL was my former employer and was bought (taken over) by Fujitsu while I was there. The mini you refer to was the System 25, right? It was the offspring of a processor built by Singer called the System 10 which was used primarily for Point of Sale Data collection. Sears & Roebuck were the big contractors for that endeavor.
jmg Posts: 15,173
    edited 2015-06-20 16:56
    And of course the deadly part:

    Any hypothetical cog that supported interrupts would no longer be deterministic (in any shape, way, or form)

    That does not have to be true.

However, a more natural-fit add-on to a Prop is threads / hard time slices, which is what Chip did on the earlier P2 (and may yet re-appear in the newer P2?).
Bill Henning Posts: 6,445
    edited 2015-06-20 17:02
    I hope it does, it is a much cleaner way of handling the issue.
    jmg wrote: »
However, a more natural-fit add-on to a Prop is threads / hard time slices, which is what Chip did on the earlier P2 (and may yet re-appear in the newer P2?).
evanh Posts: 15,919
    edited 2015-06-20 18:54
    I hope it does, it is a much cleaner way of handling the issue.

I'm pretty confident that's thoroughly relegated to the Prop3 ideas bin. HubExec has priority, and I remember Chip having a fight trying to merge HubExec with it.
Bill Henning Posts: 6,445
    edited 2015-06-20 19:06
    I agree, HubExec is far more important.
    evanh wrote: »
I'm pretty confident that's thoroughly relegated to the Prop3 ideas bin. HubExec has priority, and I remember Chip having a fight trying to merge HubExec with it.
potatohead Posts: 10,261
    edited 2015-06-20 19:13
    I have to second Heater here. Having a large group wanting a feature does not mean they are right about it at all.

    Following that idea often leads to a diluted product, less differentiated, and more expensive to support.
kwinn Posts: 8,697
    edited 2015-06-20 20:16
    I have to agree with Sandy on this issue, and here is an example of why it would be useful:
    http://forums.parallax.com/showthread.php/161256-Reading-value-of-CTRA-in-cog-0-by-another-cog

All this type of simple interrupt requires is having a cog execute the equivalent of a jmpret instruction based on a pin, one of its counters, or the value of cnt. Whether or not it would be deterministic would depend on the code executed by the interrupt.
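    For reference, jmpret already makes cooperative "soft interrupts" possible today - FullDuplexSerial interleaves its transmit and receive threads exactly this way. A minimal sketch of that existing software pattern (labels and the nop placeholders are mine; launch with cognew(@taska, 0)). The proposal here would simply have hardware force the same switch when a pin, counter, or cnt event occurs:

        DAT
                org     0
        taska   nop                     ' ... a slice of task A's work goes here ...
                jmpret  apc, bpc        ' save A's resume point in apc, switch to task B
                jmp     #taska
        taskb   nop                     ' ... a slice of task B's work goes here ...
                jmpret  bpc, apc        ' save B's resume point in bpc, switch back to task A
                jmp     #taskb
        apc     long    taska           ' resume addresses, pre-loaded with the entry points
        bpc     long    taskb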
evanh Posts: 15,919
    edited 2015-06-20 20:55
    Hehe, even though I'd commented in that topic, I'd not read what he was ultimately trying to accomplish.

The obvious way is to just use CNT and do a differential subtraction as part of his main loop. Each time the difference reaches a second or more, add that many seconds to ETime. Pretty much the same as his plans for calculating days from seconds.
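    In Spin, that differential subtraction might look something like this (just a sketch; the variable names and loop body are placeholders):

        PUB Main | last, delta, etime
          etime := 0                            ' elapsed whole seconds
          last := cnt
          repeat
            ' ... the rest of the main loop's work ...
            delta := cnt - last                 ' wraps safely if sampled at least every ~26s @ 80MHz
            if delta => clkfreq                 ' a second or more has elapsed?
              etime += delta / clkfreq          ' credit the whole seconds
              last += (delta / clkfreq) * clkfreq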
Cluso99 Posts: 18,069
    edited 2015-06-21 01:04
    Hal Albach wrote: »
    ICL was my former employer and was bought (taken over) by Fujitsu while I was there. The mini you refer to was the System 25, right? It was the offspring of a processor built by Singer called the System 10 which was used primarily for Point of Sale Data collection. Sears & Roebuck were the big contractors for that endeavor.
Yes, I was referring to both the Friden/Singer/ICL System Ten and 25. In Australia it was used in many industries without POS. One customer had 35 local M85 Video Terminals processing Order Entry real-time. Another had over 130 remote (all over Australia) PCs emulating M85s and processing real-time Order Entry using boards I designed that plugged into the S25 Card Cage.
    I bought my own S10 in 1977 (18 months old) which I maintained and ran until 2000. I also bought an S25 later which also ran until 2000. No pocket computers back then!
Leon Posts: 7,620
    edited 2015-06-21 03:54
    XMOS has interrupts, but they are primarily intended for legacy code.
Heater. Posts: 21,230
    edited 2015-06-21 04:20
XMOS is a good example. Multiple cores and no interrupts. Or at least that is the programming model presented if you use their preferred language XC. Everything is event driven: code runs, gets to a point where it has to wait, and then proceeds when the event it waits on happens. Meanwhile other code runs at the normal speed, not even noticing when those events occur. So simple and much easier to reason about.

    I always wondered why the XMOS had hardware support for interrupts at all. I guess Leon is right, it is to support legacy C code and such. I have never seen that used.

    To my mind an interrupt is just one of those old kludgy hacks that people cling to for misguided reasons or just out of familiarity. Like the GOTO in high level languages but worse. Not only is your source code deciding when execution can jump to any random place in the application but worse still the hardware is making such jumps at any random time for you!

As you probably know the arguments for and against GOTO went on for decades before it was finally expunged from modern high level languages like Java, JavaScript, Spin, Python etc. (Note: the timorous Java creators kept GOTO as a reserved word; they were not sure if they would have to cave in to user demand and implement it!)

Douglas Crockford has given a number of talks about the history of computing and how new concepts are soundly rejected by practitioners for decades; they want the bad old ideas they are used to. Examples include: the rejection of high level compiled languages in favour of assembler, the desperate fight over GOTO, interrupts, shared memory threading, the slow adoption of functional programming, the slow take up of immutable data structures, etc. etc.
evanh Posts: 15,919
    edited 2015-06-21 04:20
    Leon wrote: »
    XMOS has interrupts, but they are primarily intended for legacy code.

... for code ported from pre-existing architectures, might be a better phrase than legacy code. Legacy is usually interpreted as previous incarnations of the target.
evanh Posts: 15,919
    edited 2015-06-21 04:32
    Heater. wrote: »
... Everything is event driven: code runs, gets to a point where it has to wait, and then proceeds when the event it waits on happens. Meanwhile other code runs at the normal speed, not even noticing when those events occur. So simple and much easier to reason about.

A detail in all this is event memory, an edge detect so to speak - this applies equally to interrupts. I presume XC has some sort of kernel to manage event occurrences so that when the code loops back to its event handler, it is able to see any pending events that have already been triggered.

In the earlier CNT example the event memory is hand coded, in the fact that the CNT value can be checked for passing a threshold for a long period before an overflow will occur, i.e. the event is detectable after the fact.

    EDIT: Interrupt hardware comes in two forms, level detect and edge detect. With edge detect the interrupt controller manages the event memory all itself. In level detect the IRQ source hardware has to hold the IRQ active until software services it. This puts some of the event memory burden on the source hardware.
Heater. Posts: 21,230
    edited 2015-06-21 04:32
    evanh,

That is a very limited interpretation of "legacy" you have there. Taken literally, "legacy" is whatever someone has left to you.

In the computing world I have always taken it to mean all that old code and hardware junk that none of the new owners of a project know anything about or even want to know anything about. They are saddled with it because the cost of redevelopment using current, supported languages, tools, methods and techniques is just too high.

    Got to love wikipedia:

    legacy system:

    In computing, a legacy system is an old method, technology, computer system, or application program, "of, relating to, or being a previous or outdated computer system."[1] Often a pejorative term, referencing a system as "legacy" often implies that the system is out of date or in need of replacement.
evanh Posts: 15,919
    edited 2015-06-21 04:36
    Heater. wrote: »
    That is a very limed interpretation of "legacy" you have there.

Porting is a much clearer meaning, and certainly far less loaded in its use.
Heater. Posts: 21,230
    edited 2015-06-21 06:10
    evanh,
    ...pre-existing architectures, might be a better phrase than legacy code.
Might be. But "legacy code" is a phrase in common use. For example all that code I wrote in former lives in languages like PL/M, Coral 66, Lucol is certainly "legacy". If you have such an old system still running (and many of them are still running) you are going to have a hard time if you want to bug fix it or tweak it or even just keep it running in the face of hardware changes. The compilers are not available, nobody knows those languages any more, and if they do they probably don't want to see it again unless you have lots of cash to wave at them :)

    Code can become "legacy" very quickly. For example there are plenty of web sites built with PHP, MySQL and JQuery. The young dudes don't want to do that any more when they have node.js, NoSQL databases and react or whatever shiny thing is current.

    "pre-existing architectures" seems far too "politically correct". It's like having to say "senior citizen" instead of "geriatric" or my local city council having a sign by the road indicating "Civic Amenity Point" when they mean "rubbish dump" or "tip".
    Porting is a much clearer meaning, and certainly far less loaded in it's use.
    "porting" is porting. That's a case of recompiling your code for a new target processor and adapting it to some new hardware interfaces or operating system API. Porting happens all the time. Such ported code is not really "legacy" it's alive and well, in use and being maintained and carried forward to new environments.

    "legacy" is what I'm talking about above.
    A detail in all this is event memory, an edge detect so to speak - this applies equally to interrupts.
    There are a few different definitions of "event driven" systems. None of them have anything in common with interrupts. The JavaScript programming model for example is event driven. An event happens, that triggers code to run, that code runs to completion and then the engine is ready to handle the next event. Incoming events are queued up by the OS/run time. It is strictly a single threaded, one thing at a time, programming model.

    More interesting for us here is the XMOS and Propeller style event model. In this model for every possible event source there is a processor waiting to run the event handler code as soon as the event happens. Minimal latency, full speed deterministic execution, no memory sharing. Who cares about edge or level trigger here? Certainly no need for interrupts in this programming model.

XMOS cheat a bit. They advertise 8 "logical cores" per core and the top of the range devices have four cores. That is to say they have hardware instruction-by-instruction time-slicing going on to give the effect of 8 separate cores for each real core. Those logical cores have their own program counters, registers and stacks. Ready to continue immediately, from wherever they were halted, whenever the next event arrives.

    In both the Propeller and XMOS models there is no "kernel" managing anything.
evanh Posts: 15,919
    edited 2015-06-21 06:16
    evanh wrote: »
    A detail in all this is event memory, an edge detect so to speak.

    Sorry, "event capture" would be a better term.

The point I'm making here is that the shorter the pulse duration, the more processing resources end up having to be thrown at capturing that event if you are only comparing an input with its last state, to the extent of having to throw a whole Cog at it. I can see why some want specialised hardware to help out.

There are the A and B Cog counters. They can effectively be used as digital input edge detectors without tying up a whole Cog in a tight loop. That gives quite a few options really.
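    For example, counter A can count rising edges on a pin with zero Cog overhead (a sketch; the pin number and method name are assumptions):

        CON
          EVENT_PIN = 0                         ' hypothetical input pin

        PUB CountEdges | edges
          ctra := (%01010 << 26) | EVENT_PIN    ' counter A mode: POSEDGE detector on EVENT_PIN
          frqa := 1                             ' add 1 to phsa on every rising edge
          phsa := 0
          repeat
            edges := phsa                       ' read the accumulated edge count whenever convenient
            ' ... other work; no tight polling loop required ...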

    Maybe there is a need for documenting ways of effecting event capture on the Propeller. Is there any such documentation already?
kwinn Posts: 8,697
    edited 2015-06-21 06:23
    evanh wrote: »
    Hehe, even though I'd commented in that topic, I'd not read what he was ultimately trying to accomplish.

The obvious way is to just use CNT and do a differential subtraction as part of his main loop. Each time the difference reaches a second or more, add that many seconds to ETime. Pretty much the same as his plans for calculating days from seconds.

    That would work, and there are certainly other ways to do it as well. The point is there are circumstances where having a simple interrupt can make the overall code faster, simpler, and shorter. In this case it is a signal that comes along at a regular interval. What if it was more random or required immediate attention? How many processing cycles are used looping and testing to see when some condition is true compared to what is needed for the actual task? How much memory is taken by the looping and testing versus that required to deal with the task?
evanh Posts: 15,919
    edited 2015-06-21 06:32
    kwinn wrote: »
    What if it was more random or required immediate attention? How many processing cycles are used looping and testing to see when some condition is true compared to what is needed for the actual task? How much memory is taken by the looping and testing versus that required to deal with the task?

The WAIT instructions are the ultimate responsive method. Much faster than an interrupt. And smaller too. However, for many situations responsiveness is not a tight requirement; just detecting the event is what matters most.
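    Something like this parks a Cog on a pin and responds within a few clocks of the edge (a sketch; the pin number and "handler" are placeholders, launched with cognew):

        PUB WatchPin(pin) | hits
          hits := 0
          dira[pin] := 0                        ' make sure the pin is an input
          repeat
            waitpeq(|< pin, |< pin, 0)          ' sleep until the pin goes high
            hits++                              ' the "handler" - runs almost immediately after the edge
            waitpne(|< pin, |< pin, 0)          ' wait for the pin to drop before re-arming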
Heater. Posts: 21,230
    edited 2015-06-21 06:41
    @evanh
    "event capture" would be a better term.

The point I'm making here is that the shorter the pulse duration, the more processing resources end up having to be thrown at capturing that event if you are only comparing an input with its last state...
    You may have a good point re: capturing events. Although I'm not sure I follow what you are saying yet.

With regard to short pulses, you make me wonder how long a state has to be asserted on a pin before WAITxx on that pin will respond to it. Presumably a picosecond or nanosecond pulse may be missed. Is this specified anywhere? Has anyone measured it? What is the "hold time" on such inputs?

    Now the part I don't understand. If missing such short duration pulses is the issue in your Propeller system then no amount of code can help you. You will need some external flip-flop or whatever to make the pulse long enough to register on the pin input.

    It could be argued that Propeller inputs should have such signal capture and conditioning built in to the I/O system, the pins. Good idea, that is what XMOS does. Perhaps that is what will be in the "smart pins" of the PII.

    @kwinn,
    The point is there are circumstances where having a simple interrupt can make the overall code faster, simpler, and shorter.
    And for every one of those cases having a separate processor waiting on the event will make the latency shorter, the overall execution faster, the code simpler and shorter than having a single CPU and an interrupt system.

That is the world I like to see. One could perhaps argue that the Propeller does not have enough cores to pull this off for all your applications. Or that only having 496 instructions available for high speed event handling is too restrictive.

    Well, that is why we are all waiting for the P2 event :)

One could argue that having a full up 32 bit CPU spending most of its time waiting for events is terribly wasteful. I say "So what? A CPU today is very small, very cheap, very frugal when it's not running, why not use the silicon to make development easier and more predictable?"
evanh Posts: 15,919
    edited 2015-06-21 06:54
    Heater. wrote: »
    It could be argued that Propeller inputs should have such signal capture and conditioning built in to the I/O system, the pins. Good idea, that is what XMOS does. Perhaps that is what will be in the "smart pins" of the PII.

    There we go. Cool. Every pin? Good idea. So I guess that also goes for every hardware function that can generate events. Each one having a state change bit ... that clears upon reading?

    I don't think I've seen Chip mention such a feature as yet, but as you say Smartpins hasn't been discussed/done.
kwinn Posts: 8,697
    edited 2015-06-21 09:25
    Heater. wrote: »
    @evanh
    @kwinn,

    And for every one of those cases having a separate processor waiting on the event will make the latency shorter, the overall execution faster, the code simpler and shorter than having a single CPU and an interrupt system.

That is the world I like to see. One could perhaps argue that the Propeller does not have enough cores to pull this off for all your applications. Or that only having 496 instructions available for high speed event handling is too restrictive.

    Well, that is why we are all waiting for the P2 event :)

One could argue that having a full up 32 bit CPU spending most of its time waiting for events is terribly wasteful. I say "So what? A CPU today is very small, very cheap, very frugal when it's not running, why not use the silicon to make development easier and more predictable?"

All well and good as long as there are enough CPUs, and I have no problem with CPUs waiting for events or even sitting idle. Most of my projects rarely use more than 3 or 4 cogs.

Call it an interrupt or a hardware initiated thread or an event capture, but adding the hardware for switching the execution path to every cog would take up much less space than adding a single CPU.
kwinn Posts: 8,697
    edited 2015-06-21 09:46
    RE: "event capture/edge detection"

    The "waitpne" instruction in the P1 is already a form of event capture, and similar hardware could be used for a simple interrupt/thread switch/event capture. Whether that hardware is in the smart pin or the cpu does not really matter.