
All about interrupts

Comments

  • Heater. Posts: 21,230
    Excellent. Can't wait for the "bad and ugly" part.
  • Heater. Posts: 21,230
    As one of the commenters there says:

    "One of the most common questions is: Can I use A, B, and C with an Uno/Due/Whatever? The answer invariably is yes, you have enough pins to do that, but no, because the libraries used to support those devices are using conflicting timers. One change I think (my opinion of course) would help improve the Arduino ecosystem is to bring the timers and interrupt configuration out of the backend and more into the forefront so that hobbyists can be more conscious about how libraries utilize the resources of the microcontroller, because they are a constraint just as much as only having X number of digital pins or Y number of analog pins."

    Which is why I always argued that interrupts were horrible when it's desirable to mix code objects from different places into your code, and why the Prop is awesome for totally avoiding that issue.
  • Or... if you are going to be a P2 person at some point.

    The P2 has interrupts.

    I think we are going to find it's still awesome for mixing code objects in the majority of cases.

    Turns out, having interrupts local to the COG isn't too much different from having tasks local to the COG. And most interrupt cases are local to the COG, in that using the same interrupt on different COGS should still work the way COGS work, meaning one COG won't be impacting another one.
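
    To make that concrete, here is roughly what a COG-local timer interrupt looks like in P2 assembly, going by the instruction names in the current docs (SETINT1, ADDCT1, RETI1 could still change, and pin 56 is just a stand-in for a board LED). Nothing in it can touch another COG:

            mov     ijmp1, #isr         ' point interrupt 1's vector at isr
            getct   time
            addct1  time, ##5_000       ' first CT1 event 5,000 clocks out
            setint1 #1                  ' event %0001 = CT1 drives interrupt 1
    loop    jmp     #loop               ' main COG code; other COGs unaffected

    isr     drvnot  #56                 ' toggle a pin
            addct1  time, ##5_000       ' schedule the next CT1 event
            reti1                       ' return from interrupt 1

    time    res     1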

    It does appear possible to engangle things, but that's possible on a P1 too, just to a lesser degree.

    We shall see soon. Hoping for a good outcome personally.

  • Heater. Posts: 21,230
    Normally I'd be saying that the suggestion that a Propeller have interrupts is heretical and blasphemous. But as this development in the P2 comes from the Creator himself, I don't have a leg to stand on.

    As far as I can tell you are correct: interrupts on the P2 are local to a COG and will be local to Spin objects (C++ classes, whatever). That is to say, I have yet to see how code running on one COG can ever know about or be affected by the use of interrupts on another COG.

    That means that my object A, which does all kinds of weird stuff, with or without interrupts, can be mixed into a program with your object B that does the same, and they will both work as well as they ever did.

    Except... I can see odd cases where:

    a) An object may not actually run in a COG of its own; it's just code that is called from some other object.

    b) It has sequences of statements that are time-critical: bit-banging a pin to some device, say.

    c) Then interrupts used in the calling object may upset things for the sub-object.

    Or vice versa, interrupts set up in a sub-object may upset a client object.
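
    The only fence I can see for that is something like this (a sketch, assuming the STALLI / ALLOWI instructions stay as documented; PIN and BIT_TICKS are made-up constants), and it only helps if the sub-object's author knew to use it:

            stalli                      ' hold off this COG's interrupts...
            drvh    #PIN                ' ...so this bit-banged sequence
            waitx   #BIT_TICKS          '    keeps its cycle-exact timing
            drvl    #PIN
            waitx   #BIT_TICKS
            allowi                      ' resume servicing interrupts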

    This is dangerous ground for the future of building programs by mixing and matching objects, and it's why I have argued against interrupts since forever.

    I love your new word there, "engangle". It's like making a gang out of a bunch of things in a tangled way :)

  • Engangle is a typo, but I like it too.

    Set interrupts aside. HUBEXEC opens that messy door anyway, though not to the same degree.

    I submit that providing hardware and assembly / SPIN language support for hub executable code will, no matter what, bring complexity that the more rigid P1 programming model avoided.

    The better use case there is very likely to be "put reuse-type objects into COGS", and when they get bigger than a COG, the default will be to manage them in the HUB.

    Having that option is something everybody wants, and the cost will be less reuse on larger things.

    Maybe that's OK?

    The bigger / more complex things get, maybe it's just less practical to reuse with the simple model we have become used to on P1.

    No matter what, it's gonna play out, and I think it needs to play out. Some aspects of the Propeller have been generalized. In particular, WAITVID is gone. Most of us, if not all of us, wanted that generalization too.

    What I found interesting is the "hot" chip adhered a lot more closely to the P1 model, and it took a lot to get there. Too much. 5 watts in a BGA! Some of us thought it worth doing at a lower clock, but others wanted the higher clock to put more time-precise tasks within reach.

    Adding interrupts brought the more generalized COGS up to some very good capacity to get things done, and it was either that, or the tasker, or nothing.

    Doing nothing seems like not enough. We don't have a spec; it's just not enough, like we know it can do more.

    Doing it with interrupts takes less power. I think getting a lot of things done on the process we are targeting has clear limits that the "hot" chip bumped into hard.

    But, we kind of want to get those things done.

    Some of us want a faster P1 with more COGS and more HUB RAM. That too, seems not enough.

    But, this chip basically does deliver that. And some of us made the argument that "you don't have to use them."

    Others countered with, "we won't get the body of objects without this being enforced."

    To which I personally agreed, until I really started thinking about HUBEXEC and what it's going to mean if it runs at any respectable speed.

    Further complicating things was the very ugly discussion on HUB throughput. Almost nobody agreed, with almost everyone either holding out for the simple P1 scheme, or advocating some dynamic allocation type scheme. There were tons of them! Some simple, some complex. Heater, you even proposed a random one! (which was very interesting)

    So we got the eggbeater, which shifts the discussion some. Instead of instruction / cycle counting for the HUB, we can plan on minimums being possible. And for some cases, we get a lot more throughput. Nobody got what they wanted.
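
    To put rough numbers on that (a sketch of the scheme as I understand it so far, assuming a 16-slice rotation): a lone random access waits for its slice of HUB RAM to come around, bounded by one rotation, while sequential access through the new FIFO can stream close to a long per clock:

            rdlong  x, addr             ' random: waits up to one full
                                        ' rotation for addr's slice
            rdfast  #0, addr            ' sequential: start the hub FIFO...
            rflong  x                   ' ...then pull longs back-to-back
            rflong  y

    x       res     1
    y       res     1
    addr    res     1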

    And I think that's a good move, particularly when we also got a lot more COGS.

    The way I see it, we can treat a P2 like a faster, more roomy P1 with very few forced differences. We can also treat a P2 as an entirely different beast!

    Rather than pontificate on this vs that, it's all gonna compete!

    The outcome might be:

    1. We get P1 style reuse, and it's gonna work, because doing it that way is all still possible.

    2. We get enhanced P1 style reuse, with interrupts kept local to the COGS. (There are cases with pins, and the debug interrupt, that can influence other COGS, but the shared-pin case does that anyway.)

    For both of these, the combine-objects-with-a-minimum-of-fuss stuff we love is all still going to happen, and it's going to be possible to combine more objects and do one heck of a lot more with them too. I expect this to be a very good thing.

    3. We get "applications" that are blobs. Take 'em or leave 'em. Ripping pieces out of these will be no less of a PITA, and could be more of a PITA than doing the same thing on a P1 is. And they will be bigger too. More can be done, and it can be done faster too.

    I'm intrigued by this one actually. Building bigger things that feature a lot of things going on at one time, video, I/O, math, etc... might actually be a winner! I care less about reuse in this case. Some people are going to just make a big thing, and it is what it is. Others may actually do a hybrid, objects as we know them, and other code that is pretty dedicated. Whatever happens here will be interesting.

    4. We get "chunks" and those are targeted for the HUB, and they will be a lot more like libraries than the COG objects we are used to. Nothing special here, other than we will be doing it now.

    To sum up, the Prop will be flexible! One can apply a few different ideologies and strategies. Maybe this is a good thing. It won't be the unified environment we are used to on the P1, and reuse might not reach the same levels, though it could.

    But, that case 3 is kind of compelling. It's an interesting space where someone would otherwise think about a Pi, and Linux, etc... If they do it on a P2, maybe they won't have to futz with Linux; they can use some libraries and a few objects and end up with a pretty cool thing for being on a microcontroller.

    There is a similar thing going on in mechanical CAD right now. A lot of programs focus on the core modeling strategy, which is history based, parametric. That's powerful stuff, and when it's applied in a pure way, capable of a lot. But it's a lot of work, and it can be inflexible.

    Some CAD systems have gone ahead and supported many different ways of modeling parts. I've taught this stuff for years, and the most interesting thing about it all is the choices people have. Yes, they can make one hell of a mess, but they can really nail some problem cases that are difficult to do in the "pure" way too.

    Turns out, consumer product design differs very considerably from say, Aero and Auto design! And those differences center in on being able to employ a variety of approaches to problems.

    More interesting is the fact that groups who take the time to learn the various ways things actually can be done gain a very significant advantage too!

    So maybe, just maybe, a flexible P2 is a damn good thing. If it plays out like the CAD wars have, it's gonna be a good thing.

    And it's going to create a nice vacuum for educators willing and able to show off the possibilities. No more wars. The P2 does it this way, that way, any way!

    Instead of wars and limits, etc... It's all about learning how things can be done and then doing them in ways that align with the project and its needs.

    Doing that may well exceed the benefit of more strict reuse type approaches.

    But, doing that where it's not an advantage will definitely be a waste, and a more intense focus on P1-style reuse would be better indicated.

    People won't know which is which at first. So there will be a mess. But it might not remain a mess, and that's what I hope and expect to see over some time.



  • potatohead Posts: 10,261
    edited 2015-09-19 08:36
    One fear I had, and still do harbor, is that the elegant P1 style of doing things will get snuffed out by the mess now possible.

    I think that, to a degree, is gonna happen, but I also think we spent so long doing so damn many killer things on P1 that we will have answers for people who get into a mess that isn't working for them, too.

    And P1 isn't going away either.

    Back to mechanical CAD for a moment. I've run just about every system you can name. I've done it old school, wireframe geometry lists, all the way through boolean solids, history-based parametric, variational (simultaneous solve) parametric, direct-on-geometry non-history-based, and so on...

    When I'm faced with difficult modeling problems, legacy or crappy data, etc... I find I can apply those techniques, some old school, some very new school, and many traditional, to get stuff done quick. And I can do that and deal with very aggressive design changes too!

    That was not possible on the older systems, and the traditional ones offered limited improvement for the aggressive change case, though they changed the world in the moderate or planned or catalog part change cases.

    Today, I support and run a system that does it all. And I wouldn't have it any other way. When I'm on a system that is more ideologically pure, I hate it. I hate it, because I can't get stuff done as quickly or cleanly (at times) as I can otherwise.

    Back to software then:

    Here I don't have as much skill, and that's just due to my own life experiences and focus, but I'm the student in many ways. Fine! That's why I enjoy this stuff.

    I'm good on a P1, and I got that way because a P1 gives a person very little choice! And that's still true, and there still are and still will be P1 chips for people to have the experience I did: jumping on the thing, grabbing objects, and doing stuff that was more difficult and often more time-consuming otherwise!

    In the CAD world, I train people and I often do that for some big names you would definitely recognize too. And I've seen most of it play out, which is why they call me. All good. I can see what they are faced with and deliver means, methods and strategies that nail it for them.

    In the P2 world, would this not be a great thing?

    I hope it is, because I think that is what we are about to see happen. It means people like me are going to learn some new tricks and it will be very good for us to do. It also means people are going to make some messes too. Oh well, or they can try a P1, or get help, etc... too.

    An ideologically flexible P2 is still going to be magic when we treat it right, and enough of us know how to do that for it to shine and for some good reuse to happen.

    But, that same flexibility is also going to mean nailing some use cases cold too! And we don't know where those are yet, or what niches they will occupy, but I very strongly suspect they are there.

    And enough of us have "seen it all" to maximize those in a way similar to what I'm describing with CAD too.

    Now, my last point!

    As an educator, again, the CAD system I prefer can do it so many ways, it's great! I can teach any technique I've ever learned on the thing. Doesn't mean I do that, but I can. And it means when I need to, I can. (very important)

    For Parallax education, this might just be a goldmine. No joke.

    For people who can teach this stuff from a body of experience, it might also be a gold mine.

    Sorry if some of the CAD terminology doesn't make sense. Ask me, and I'll expand or clarify, maybe in general discussion or something.

    But I'm hoping enough of it does to make the parallels, or should I say potential parallels, clear.

    If nothing else, keep an open mind. I think we need that right now. We think we know stuff, but really we might not know as much as we think when it comes to potential use cases, features and benefits and dynamics surrounding this thing.

    One thing I know for sure is we have a set of brains here who have seen and done a lot! Just the thing needed to maximize this chip that Chip is making.

    And he wants something powerful, educational, fun, etc... I think he's nailing those design goals personally, and I'm trying hard here to explain part of why that might be true.
  • Heater. Posts: 21,230
    Potatohead,
    HUBEXEC opens that messy door anyway, though not to the same degree... I submit that providing hardware and assembly / SPIN language support for hub executable code will, no matter what, bring complexity that the more rigid P1 programming model avoided.
    Could you explain how that is so? Or give an example of some kind?

    As far as I can tell HUB exec is logically the same as the current use of an LMM loop, or even an interpreter like Spin, only sped up by being done in hardware.
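
    For anyone who has not seen it, the whole trick of LMM on the P1 boils down to a tiny fetch loop like this (the classic form); HUB exec just moves that loop into the silicon:

    LMM_loop
            rdlong  LMM_instr, pc       ' fetch the next instruction from HUB
            add     pc, #4              ' advance the software program counter
    LMM_instr
            nop                         ' fetched instruction lands and runs here
            jmp     #LMM_loop

    pc      long    0                   ' set to the HUB code address by the loader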

    I'm guessing the new "eggbeater" HUB access adds to the indeterminism of the execution rate of HUB exec code. But I don't see that anyone uses the instruction-cycle-counting style of timing things with HUB code. There may be a slight question over whether a fully loaded P2 can slow the peak performance of HUB exec for a COG.
    The bigger / more complex things get, maybe it's just less practical to reuse with the simple model we have become used to on P1.
    When programs get bigger, that's when you need modularity. Is it not?

    Your general thrust is that having more ways to do the same thing is better. I'm not buying that idea.

    I'll cite the current web development world as a case in point. If you want to build a complex GUI in a web app today there are dozens of libraries and techniques available to help. You can suffer severe "paralysis of choice" just evaluating them and deciding which way to go. Having decided, you may have a harder time getting new developers, who are probably fluent in some other system, on board and up to speed.

    This kind of choice is just redundant complexity making it harder for everyone to understand each other. It's enganglement :)


  • Cluso99 Posts: 18,069
    You missed that cog programs can now be twice the size (with a few restrictions), and we can do really fast overlay loading too!
  • potatohead Posts: 10,261
    edited 2015-09-20 22:03
    What I was thinking of is having shared code in HUB that multiple COGS will execute. COGS can start in HUBEXEC mode now, so there isn't that isolation anymore. Should that code get corrupted, all COGS needing it will go astray.

    None of this is bad, IMHO. Just different. I think we will use the ability to take a pool of COGS and throw them at a problem with shared code, each of them running the problem, re-entering it as needed, for example.

    We did do LMM, and it's largely the same, but did people do a lot of concurrent LMM? I'm not sure I saw the lots-of-COGS-all-running-code-in-the-HUB case very much, if at all.
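
    A sketch of that pool-of-COGS idea, assuming the COGINIT encoding as currently described (%11_0000 = start any free COG, executing straight from a HUB address):

            mov     n, #4
    spawn   coginit #%11_0000, ##@worker ' start a free COG in HUBEXEC...
            djnz    n, #spawn            ' ...four of them, on one shared routine
            jmp     #$                   ' this COG's own work goes here

    n       res     1

            orgh                         ' HUB-resident code, shared by all four
    worker  ' each COG brings its own register set, so register use here is
            ' per-COG; only shared HUB data needs locks / mailboxes
            jmp     #worker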

    Re: Modularity: Seems to me, there is a difference between having things be modular and having those modules be something appropriate for reuse. Ideally, it's both. Being able to build bigger things without an OS may pay off without also generating object-type reuse, that's all.

    You might be right! There may just be a big mess.

    And I guess I'm saying a mess can be made sans interrupts, and the overall capability / usability of the COGS is very significantly improved by having them. Worth it.

    I'm hoping the fact that there are 16 COGS means we avoid the ugly interrupt handler kernel type problems, or where they must exist, they can be compartmentalized into an object where they can be reused without as much hassle as they would otherwise.

    I'm eager to see how SPIN + PASM turns out too. Last discussion included things like in-line assembly. And I'm a fan of that for a lot of reasons. But it's a mess in the making too...
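
    Something like this is what I picture for in-line assembly (purely hypothetical syntax at this point, assuming locals become COG registers inside the assembly block):

    PUB pulse(pin, ticks) | t
      org
            drvh    pin                 ' locals usable as registers here
            getct   t
            addct1  t, ticks
            waitct1                     ' hold the pin high for 'ticks' clocks
            drvl    pin
      end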

    I'm also saying the P1 style, highly reusable, object style of doing things is there, largely unchanged. It may well compete nicely with the other stuff. I think it will.

    Again, in CAD, those same arguments played out. Ease of use and part / geometry reuse were both cited as the big reasons for sticking to the more rigid modeling strategies.

    For new users, those are good arguments. Once they reach some level of proficiency, those arguments lose their value. Users want options.

    P2 may show this exact same dynamic, and if it does, the "but we need it to be possible" people will have been right about it. :)

    Which means those of us in the "keep it constrained" camp would not be so right about it.

  • Heater. Posts: 21,230
    potatohead,
    Should that code get corrupted [the HUBEXEC code], all COGS needing it will go astray.
    Good point.

    I'd say that we have that problem in the P1 anyway. Any code doing anything useful is working with some shared HUB RAM; if that gets corrupted, things fail.

    Besides, most code is Spin byte codes, or LMM for C/C++, or some other byte code system, so we can already have code corruption occurring between objects.

    I don't see that hardware execution, HUBEXEC, makes any difference to those problems.

  • evanh Posts: 15,918
    potatohead wrote: »
    What I found interesting is the "hot" chip adhered a lot more closely to the P1 model, and it took a lot to get there. Too much. 5 watts in a BGA! Some of us thought it worth doing at a lower clock, but others wanted the higher clock to put more time-precise tasks within reach.

    Adding interrupts brought the more generalized COGS up to some very good capacity to get things done, and it was either that, or the tasker, or nothing.

    Doing nothing seems like not enough. We don't have a spec; it's just not enough, like we know it can do more.

    Doing it with interrupts takes less power. I think getting a lot of things done on the process we are targeting has clear limits that the "hot" chip bumped into hard.

    Oi! Correlation is not proof of causation. You know that! You are rather directly implying the Hot part came about because of the threading model. If you want to blame something related, then maybe HubExec, but even that was only indirect, because of the wide buses and attempts at fancy single-cycle timing.

    The Prop2Hot was hot long before threading was added. Time to stop implying a causation, thank you very much.
  • evanh Posts: 15,918
    edited 2015-09-21 06:18
    Adding interrupts was easier to meld with HubExec. The threads were easy too, providing they were limited to their proposed Cog space. But Chip got a bit ambitious trying to make them completely generic within the new HubExec model. That cost a little extra real-estate but mostly it was just the engineering effort.

    HubExec still has a ways to go, me thinks. For any Prop3, we can probably count on a fully functional instruction cache per Cog.
  • potatohead Posts: 10,261
    edited 2015-09-21 06:40
    No, I'm not actually.

    We see the "hot" chip came about due to a lot of things, and perhaps I should have been more clear. The "hot" one didn't have signal gating, it had massive busses, and in a real basic sense, tons of transistors toggling a lot. Not enough consideration was given to power issues.

    This current one is a lot simpler: it has gating, doesn't have the massive busses, and in a basic sense just doesn't have so many transistors toggling all the time every cycle. A lot more consideration was given to power issues.

    What I noticed is that without either a tasker or interrupts, this design seemed to fall short, like we just weren't maximizing the process physics for best capability / features. Some things done in a single COG before might have taken two or a few of them, whatever...

    I also noticed how the addition of the interrupt events, once they got simplified, didn't seem to take all that much, and they appear to offer many opportunities to perform tasks without requiring the device to be running the whole time. That made me think about power and the overall differences in how the two were designed.

    Besides, they or the tasker are needed to do what WAITVID did in P1, and because things are more generalized, we also get a lot of other cases people wanted that WAITVID didn't address too. And it might actually be simpler to implement what WAITVID did with an interrupt as opposed to a tasker and some way to poll and check the streamer, etc... That would depend on the tasker, which we don't have obviously...

    I am suggesting that the people here who said an interrupt capability would equate to a better overall ability to manage power might be right about it, and that is the part I had bouncing around and just didn't put out clearly. The hot chip did a lot more in hardware. This one does more in software.

    Maybe when we get the P2 FPGA code, a tasker can be dropped in as an exercise and some cases can be compared. Probably won't happen, because it's not trivial, but if it does, I think that exercise would tell us a lot.

    I genuinely wonder whether the tasker approach would have been simpler or more efficient, etc... What we got seems pretty clean and very usable, and like the tasker, the interrupts are local to the COG, meaning the COG is still going to be the basic unit of "object"-style reuse, like we are used to on P1. Either way, I'm happy with that.
  • evanh Posts: 15,918
    The threads were trivial. Chip put them in in a few hours flat. It all worked with no further fixes needed.