Propeller II update - BLOG - Page 195 — Parallax Forums

Propeller II update - BLOG


Comments

  • jmgjmg Posts: 15,173
    edited 2014-03-04 18:37
    cgracey wrote: »
    There is no static holding register, just a 2-bit wide variable-length loop of flops.

    So there is no read at the moment ?

    If all threads could read, as well as write, SETTASK values, that should give what you wanted in #5810, and also allow more ?

    Threads could check what level of resource they are presently being given, at any time.

    A thread may choose to modify what it does, based on that allocation.

    A readable SETTASK value would also help debug.
    cgracey wrote: »
    The thing about looking at or tweaking the task order is that it could change at any time by some other task (maybe a scheduler?) doing a SETTASK. It's kind of like worrying about what cogs are running, because it's very fluid. If you start or stop a cog, though, that is a definitive action that will have a certain effect.

    I think in #5810 you were after a readable flag, showing if someone gave a thread 100% ?

    What about a change from 1/16 to 15/16 - that's too much detail for a single flag, but quite visible with a read of SETTASK value.

    ( or from <1/16 to >15/16, if thread weighting is added)
  • Roy ElthamRoy Eltham Posts: 3,000
    edited 2014-03-04 19:06
    cgracey wrote: »
    That's a really neat way to handle it. You could do the CALL with the byte codes directly following, so that the routine called pops the address and starts interpreting the bytes, until it hits a return byte code, at which point it returns to the next long. Like you implied, PASM could be the default mode for Spin.

    The nice thing about it being a buffer you point to instead, is that you can easily reuse the same buffer (or portion of a buffer) in multiple places without growing the code size. Plus the buffer can be "anywhere" we code for them to be.
  • Bill HenningBill Henning Posts: 6,445
    edited 2014-03-04 19:06
    Nice!

    How about an unstruction to see if a task is running:

    TASKSTAT D

    Exposes "which tasks have at least 1/16 cycles enabled" as

    %3210

    - bit 0 means task 0 is running (has at least 1/16 cycles enabled)
    - bit 1 means task 1 is running (has at least 1/16 cycles enabled)
    - bit 2 means task 2 is running (has at least 1/16 cycles enabled)
    - bit 3 means task 3 is running (has at least 1/16 cycles enabled)

    Then the scheduler can tell with a simple bit test if other tasks are running

    alternative:

    ISRUNNING #0..3 wz

    cgracey wrote: »
    I figured out a way a thread can invoke a yield that a scheduler can detect:

    The thread executes SETTASK #0, turning all the time slots back over to the scheduler, which otherwise is only getting every 16th slot. The scheduler can execute:
    GETCNT  time
    SUBCNT  time
    CMP     time,#1      wz
    

    If Z then a yield occurred. A couple instructions would have executed after the yield (SETTASK #0) in the thread, though.

    I could add a simple instruction that would set Z if single-task mode was enabled. That would be simpler.
  • roglohrogloh Posts: 5,787
    edited 2014-03-04 19:26
    cgracey wrote: »
    This is a mind bender because INDA/INDB incrementing/decrementing takes place two instructions before execution occurs. What if you just had a register that the producer increments and the consumer decrements? If it's 0, there's nothing to pull out. Also, keep in mind that INDA/INDB-using instructions cannot be conditional, so if you don't want them to execute, you need to branch around them.

    I was thinking of this whole issue, too, this morning. The only hub read instruction that takes one clock is RDWIDE, since it doesn't return a value to a D register. I've thought for a while that it would be good to somehow decouple timing of hub accesses from instructions, but trying to do so creates a wall of problems.

    Ok, got the issue with INDA/INDB. I think the counter register approach you've mentioned has merit but I am suspecting the pipeline read/modify/write delay of a clock cycle when you increment a register could affect the next task reading it.

    For example what if you do a ADD D, #1 in one instruction of a task incrementing a register and the very next cycle do a SUB D, #1 in another task decrementing the same destination register? What is the final value after both actions? Is it possible it might be 1 lower than the original value because the second task reads the pre-incremented value in its 3rd stage before the first task has written its result back at the end of its 4th stage, or will it always work and be the original register value as expected? Pipelining always confuses me a bit as it depends on all the internals.
    3rd stage    - Read D and S from cog register RAM
    
      4th stage    - Execute instruction using D and S
                   - Write any D result to cog register RAM
                   - Update Z/C/PC and any other results
    
    If this problem is indeed the case and a TLOCK is required to resolve it, that won't work for keeping the hub access task aligned on hub windows. The only other idea I had is that you just maintain a single data transfer register between tasks (FIFO size is essentially 1) and a separate data ready flag which is set by the writer using MOV D, #1 and polled and eventually cleared by the reader with MOV D, #0. This also requires the reader consuming the data and clearing the flag before the next write from the other task occurs - I think it still sort of works but doesn't allow any backlog in the queue unfortunately. Hopefully it might still be usable with this limitation.
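    To make the single-register-plus-flag handshake concrete, here is a minimal model of it in Python (PASM can't be executed here, so this is only a sketch; the class and method names are illustrative, not P2 mnemonics). The writer's MOV flag, #1 and the reader's MOV flag, #0 map onto the flag assignments below:

```python
# Sketch of the one-slot mailbox described above: a data register plus a
# "ready" flag. The writer sets the flag (MOV flag, #1); the reader polls
# it, consumes the data, then clears it (MOV flag, #0).

class OneSlotMailbox:
    def __init__(self):
        self.data = 0
        self.ready = 0          # 0 = empty, 1 = data waiting

    def try_write(self, value):
        """Producer side: succeeds only if the reader has drained the slot."""
        if self.ready:
            return False        # previous value not yet consumed: no backlog allowed
        self.data = value
        self.ready = 1          # MOV flag, #1
        return True

    def try_read(self):
        """Consumer side: poll the flag, then clear it after taking the data."""
        if not self.ready:
            return None
        value = self.data
        self.ready = 0          # MOV flag, #0
        return value

mbox = OneSlotMailbox()
assert mbox.try_write(0xAB)
assert not mbox.try_write(0xCD)   # FIFO size is essentially 1: slot is full
assert mbox.try_read() == 0xAB
assert mbox.try_read() is None
```

    This captures the limitation noted above: the reader must consume and clear before the next write, so a burst cannot be buffered.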

    As to the RDWIDE, I had the same idea as you. To maintain the one-cycle execution in the hub access task during data reads we would have to use RDWIDEs only and then copy the bytes (or longs, whatever I/O prefers) from the WIDE into the queue for transferring to the I/O task. It's a bit trickier but that still may be doable, I hope.

    Note: to get some idea of where I am going with all this here using the USB application I had previously mentioned as a good example, if we had 2 tasks and a 1 in 8 scheduling ratio for a memory task, a USB I/O driver COG running on a 192MHz clocked P2 (which is a very nice sweet spot for FS USB by the way) we would get 14 instructions per bit in the I/O task AND 16 instructions per byte for the memory access task overlapping simultaneously for processing all incoming/outgoing data. That may potentially be enough for even doing CRC16 in software on the fly during DATA streaming to/from hub RAM (which I know only takes 5 cycles per byte of the 16 budget with a stack RAM CRC LUT approach). [That little CRC5 in USB is another story altogether, not sure if that is feasible doing it bitwise on the incoming data in the limited cycles we have but didn't look at it in any detail.]

    Update: Note if the CRC5 can also be done using table lookup (TBD), we have room in the stack RAM for both CRC16 and CRC5 LUTs to exist simultaneously in the 256 long RAM. This could be very nice indeed.
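    The table-driven CRC16 mentioned above can be sketched in Python as follows (a model only, not P2 code). It uses the reflected form of polynomial 0x8005 (0xA001), which is the same polynomial USB's CRC16 is built on; the init/final-XOR shown here are the plain CRC-16/ARC parameters, whereas USB additionally inits to all-ones and complements the result:

```python
# Sketch of the stack-RAM LUT approach: a 256-entry table lets each byte be
# folded into the running CRC with one lookup plus a shift and XOR, which is
# what keeps the per-byte cost down to a few cycles.

def make_crc16_table(poly=0xA001):
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

TABLE = make_crc16_table()

def crc16(data, crc=0x0000):
    """Table-driven update: one lookup per byte."""
    for b in data:
        crc = (crc >> 8) ^ TABLE[(crc ^ b) & 0xFF]
    return crc

def crc16_bitwise(data, crc=0x0000, poly=0xA001):
    """Bit-at-a-time reference, for cross-checking the table version."""
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
    return crc

msg = b"123456789"
assert crc16(msg) == crc16_bitwise(msg)
assert crc16(msg) == 0xBB3D       # standard CRC-16/ARC check value
```

    A CRC5 LUT would be built the same way with a 5-bit polynomial, which is why both tables fit comfortably in a 256-long stack RAM.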

    Roger.
  • jmgjmg Posts: 15,173
    edited 2014-03-04 19:48
    How about an unstruction to see if a task is running:

    TASKSTAT D

    Exposes "which tasks have at least 1/16 cycles enabled" as

    %3210

    - bit 0 means task 0 is running (has at least 1/16 cycles enabled)
    - bit 1 means task 1 is running (has at least 1/16 cycles enabled)
    - bit 2 means task 2 is running (has at least 1/16 cycles enabled)
    - bit 3 means task 3 is running (has at least 1/16 cycles enabled)

    Then the scheduler can tell with a simple bit test if other tasks are running

    Or, you could read SETTASK values, as I suggested above, and pick up exactly how many cycles your Thread has.
    ( not just a coarse > 1/16)

    OK, it's not decoded, but likely does have a few known patterns to choose from.
    ( and this read SETTASK value ability will be important for Debug anyway )
  • Bill HenningBill Henning Posts: 6,445
    edited 2014-03-04 20:41
    We should have the decoded version as well, as it would take a fair number of cycles to decode it due to how flexible it is.
    jmg wrote: »
    Or, you could read SETTASK values, as I suggested above, and pick up exactly how many cycles your Thread has.
    ( not just a coarse > 1/16)

    ok, It's not decoded, but likely does have a few known patterns to choose from.
    ( and this read SETTASK value ability will be important for Debug anyway )
  • cgraceycgracey Posts: 14,152
    edited 2014-03-04 20:51
    rogloh wrote: »
    Ok, got the issue with INDA/INDB. I think the counter register approach you've mentioned has merit but I am suspecting the pipeline read/modify/write delay of a clock cycle when you increment a register could affect the next task reading it.

    For example what if you do a ADD D, #1 in one instruction of a task incrementing a register and the very next cycle do a SUB D, #1 in another task decrementing the same destination register? What is the final value after both actions? Is it possible it might be 1 lower than the original value because the second task reads the pre-incremented value in its 3rd stage before the first task has written its result back at the end of its 4th stage, or will it always work and be the original register value as expected? Pipelining always confuses me a bit as it depends on all the internals.
    3rd stage    - Read D and S from cog register RAM
    
      4th stage    - Execute instruction using D and S
                   - Write any D result to cog register RAM
                   - Update Z/C/PC and any other results
    
    If this problem is indeed the case and a TLOCK is required to resolve it, that won't work for keeping the hub access task aligned on hub windows. The only other idea I had is that you just maintain a single data transfer register between tasks (FIFO size is essentially 1) and a separate data ready flag which is set by the writer using MOV D, #1 and polled and eventually cleared by the reader with MOV D, #0. This also requires the reader consuming the data and clearing the flag before the next write from the other task occurs - I think it still sort of works but doesn't allow any backlog in the queue unfortunately. Hopefully it might still be usable with this limitation.

    As to the RDWIDE, I had the same idea as you. To maintain the one-cycle execution in the hub access task during data reads we would have to use RDWIDEs only and then copy the bytes (or longs, whatever I/O prefers) from the WIDE into the queue for transferring to the I/O task. It's a bit trickier but that still may be doable, I hope.

    Note: to get some idea of where I am going with all this here using the USB application I had previously mentioned as a good example, if we had 2 tasks and a 1 in 8 scheduling ratio for a memory task, a USB I/O driver COG running on a 192MHz clocked P2 (which is a very nice sweet spot for FS USB by the way) we would get 14 instructions per bit in the I/O task AND 16 instructions per byte for the memory access task overlapping simultaneously for processing all incoming/outgoing data. That may potentially be enough for even doing CRC16 in software on the fly during DATA streaming to/from hub RAM (which I know only takes 5 cycles per byte of the 16 budget with a stack RAM CRC LUT approach). [That little CRC5 in USB is another story altogether, not sure if that is feasible doing it bitwise on the incoming data in the limited cycles we have but didn't look at it in any detail.]

    Update: Note if the CRC5 can also be done using table lookup (TBD), we have room in the stack RAM for both CRC16 and CRC5 LUTs to exist simultaneously in the 256 long RAM. This could be very nice indeed.

    Roger.


    The pipeline has data-forwarding circuitry to keep register values up-to-date, so it doesn't matter if one task modifies a register and another task uses that register's value in the next instruction in the pipeline. Everything will be current. All that is a function of the pipeline. You can push any order of tasks' instructions into the pipeline and all register values will track properly.

    That's a neat idea about a 1/8th task getting the hub slot every time, so that it can be the one to issue WRxxxx and RDWIDE commands.
  • jmgjmg Posts: 15,173
    edited 2014-03-04 20:51
    We should have the decoded version as well, as it would take a fair number of cycles to decode it due to how flexible it is.

    Yes, decoded too would be nice, but if doing that, one may as well expand to encode 4 bits per task to convey all the information in a 4 x 4 bit read value.
  • potatoheadpotatohead Posts: 10,261
    edited 2014-03-04 21:41
    How do you deal with the fact that reading SETTASK basically returns invalid information? It's only good info, until it isn't kind of thing.
  • jmgjmg Posts: 15,173
    edited 2014-03-04 21:54
    potatohead wrote: »
    How do you deal with the fact that reading SETTASK basically returns invalid information? It's only good info, until it isn't kind of thing.

    I'm not sure what you mean.

    Chip said in #5821 that there is no read-back yet, so the suggestions are for being able to read back the last-set value (not the dynamic shifting value).

    That read can be either a copy of what was written (simplest RAM equivalent behaviour model) and/or some encoded version of the Task Map, that gives time slices per thread.
    A 4 bit encode per Thread gives full resolution read-back, easy to use, but does have a small logic cost.
  • potatoheadpotatohead Posts: 10,261
    edited 2014-03-04 23:08
    Yes, and the moment you read it, you don't know whether or not it's valid. Good, until it isn't. :)

    Well, it might be good in the case of a TLOCK, where nothing else is going on, but in multi-tasking mode, it's not good, because the value read might not have anything to do with the true state at the time of action. In the TLOCK case, a soft copy, ideally modified to the new task intent, would be written to the register anyway. Tasks interested in their state would simply query that copy, or communicate / understand state based on whatever the software setup is for doing that.

    Unless we do something like a supervisor mode, or use latch type logic to designate a task as the owner, locking the others out from the register, reading it makes little sense.
  • roglohrogloh Posts: 5,787
    edited 2014-03-04 23:40
    cgracey wrote: »
    The pipeline has data-forwarding circuitry to keep register values up-to-date, so it doesn't matter if one task modifies a register and another task uses that register's value in the next instruction in the pipeline. Everything will be current. All that is a function of the pipeline. You can push any order of tasks' instructions into the pipeline and all register values will track properly.

    This is great news, and I obviously should have realized it must already be possible to do that; otherwise single tasks would likely have the exact same issue when you update COG registers back to back.

    So I think we can now basically support INDA/INDB wrapping FIFO queue sizes greater than 1, which is great for being able to accumulate a temporary burst of data if you can catch up later when you read it out for sending to the hub. That helps relax timing a bit.

    I imagine then the consumer task waiting for new data can just be doing a "JZ D, S" back to itself polling the fifo counter register value which is a quick way to detect presence of available data in the fifo, and also, being run at the 1/8 CPU rate, all jumps in this task should only effectively take 1 cycle.
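    The producer/consumer scheme described in the last two paragraphs can be modeled roughly like this in Python (a sketch only; the P2 version would use INDA/INDB-addressed cog RAM and a "JZ count, #self" poll, and the data-forwarding Chip describes is what makes the shared counter safe across tasks):

```python
# Model of the shared-counter FIFO: a circular buffer with one count
# register that the producer increments (ADD count, #1) and the consumer
# decrements (SUB count, #1). A zero count means nothing to pull out.

class CounterFifo:
    def __init__(self, size=8):
        self.buf = [0] * size
        self.head = 0            # producer index (INDA-style wrap)
        self.tail = 0            # consumer index (INDB-style wrap)
        self.count = 0           # the shared counter register

    def push(self, value):
        if self.count == len(self.buf):
            return False         # buffer full: producer must stall
        self.buf[self.head] = value
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1          # ADD count, #1
        return True

    def pop(self):
        if self.count == 0:      # JZ count, #self (keep polling)
            return None
        value = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1          # SUB count, #1
        return value

fifo = CounterFifo(size=4)
for v in (1, 2, 3):
    fifo.push(v)                 # a short burst backlogs safely
assert [fifo.pop() for _ in range(3)] == [1, 2, 3]
assert fifo.pop() is None
```

    Unlike the single-register-plus-flag handshake, a depth greater than 1 absorbs a temporary burst, which is exactly the timing relaxation noted above.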

    Cool.
  • Cluso99Cluso99 Posts: 18,069
    edited 2014-03-04 23:46
    jmg wrote: »
    I'm not sure what you mean.

    Chip said in #5821 that there is no read-back yet, so the suggestions are for being able to read back the last-set value (not the dynamic shifting value).

    That read can be either a copy of what was written (simplest RAM equivalent behaviour model) and/or some encoded version of the Task Map, that gives time slices per thread.
    A 4 bit encode per Thread gives full resolution read-back, easy to use, but does have a small logic cost.
    Why not keep a copy of the bits when using SETTASK ? Surely we don't need an instruction for this???
  • cgraceycgracey Posts: 14,152
    edited 2014-03-05 00:09
    Cluso99 wrote: »
    Why not keep a copy of the bits when using SETTASK ? Surely we don't need an instruction for this???

    That's how I would approach it - at the application level, where there is common understanding among tasks about the usage plan.
  • Heater.Heater. Posts: 21,230
    edited 2014-03-05 01:08
    Bill,
    How about an unstruction to ....
    I think we need a lot more than one "unstruction".
    We have far too many instructions already, a bunch of unstructions would cancel some out and get us back to something more manageable :)

    No, I don't want to take an axe to the PII. But sometimes clever pruning gets you more fruit.
  • jmgjmg Posts: 15,173
    edited 2014-03-05 01:25
    Cluso99 wrote: »
    Why not keep a copy of the bits when using SETTASK ?

    Exactly my original suggestion.
  • Cluso99Cluso99 Posts: 18,069
    edited 2014-03-05 01:46
    jmg wrote: »
    Exactly my original suggestion.
    I meant in software! Perhaps I did not word it well but I think Chip understood what I meant.
  • Bill HenningBill Henning Posts: 6,445
    edited 2014-03-05 05:42
    Ah good! You caught my typo!

    The question was how to figure out if a task has yielded its own slices out of existence.

    SETTASK divides a long into 16 two-bit fields, which can hold the two-bit values in any order. Determining if a given task is running will take many cycles, as the long has to be decoded, greatly slowing the scheduler, which normally runs at 1/16.

    I proposed the new "unstruction" to let the scheduler simply determine which tasks are running (at any fraction of the cycles). I don't really care if it is used - what I care about is not having to spend many cycles decoding the SETTASK long (which is not currently readable) to figure out whether a task running a thread has yielded itself into not executing.
    Heater. wrote: »
    Bill,

    I think we need a lot more than one "unstruction".
    We have far too many instructions already, a bunch of unstructions would cancel some out and get us back to something more manageable :)

    No, I don't want to take an axe to the PII. But sometimes clever pruning gets you more fruit.
  • Bill HenningBill Henning Posts: 6,445
    edited 2014-03-05 05:56
    Code challenge:

    Show the minimum cycles of code to decode a read-back of SETTASK's long into four flags, showing which tasks are active.

    The long consists of 16 two-bit fields %bb that encode the task number. The decode must take into account fewer than 16 slots being defined.

    The code should show all the code needed for decoding the running task states, and include a worst-case cycle count.
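    For reference, here is the decode itself modeled in Python rather than cycle-counted PASM (so it answers what the decode does, not the cycle-count part of the challenge): walk the 16 two-bit slot fields and OR a one-hot bit per task number.

```python
# Model of the decode the challenge asks for: turn a SETTASK long
# (16 two-bit slot fields, each holding a task number 0..3) into a 4-bit
# %3210-style "task is running" mask.

def tasks_running(settask_long, slots=16):
    """Bit n of the result is set if task n owns at least one slot."""
    mask = 0
    for i in range(slots):                  # fewer than 16 slots may be in use
        task = (settask_long >> (2 * i)) & 0b11
        mask |= 1 << task
    return mask

# SETTASK #0 (all slots to task 0): only task 0 runs.
assert tasks_running(0x00000000) == 0b0001
# Alternating tasks 0 and 1 (%...01_00_01_00): tasks 0 and 1 run.
assert tasks_running(0x44444444) == 0b0011
# One slot of task 3, the rest task 0:
assert tasks_running(0x00000003) == 0b1001
```

    The loop makes the cost argument visible: software must touch all 16 fields, which is the many-cycle decode a hardware TASKSTAT would avoid.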
  • mindrobotsmindrobots Posts: 6,506
    edited 2014-03-05 06:03
    Once you have a scheduler model, you need to push a lot of this work back into the scheduler code and you have to have agreements with threads as to what they can and can't do.

    For example, a thread can't be allowed to do a SETTASK because then it can usurp the scheduler and break the whole model.

    A thread needs a way to yield and also a way to terminate so the scheduler can clean up anything it needs to as a result of the thread going away. There needs to be a way for a thread to ask the scheduler to do something for it. In the case of the Propeller, that's a software request (as opposed to an interrupt that triggers the scheduler to wake up and look for work).

    A scheduler is always looking for something to turn control of the worker task over to. That's its job, to keep the task (in our case) as busy as possible doing user work. A thread yields for some reason, sometimes just as a courtesy, but unless it has asked for a mandatory time period before it is considered a candidate thread, it could be put right back on the air by the scheduler if there are no other "ready" threads. This is just the scheduler doing its job.

    The scheduler is going to establish the task model. It should know which thread(s) are currently running, which are "ready" and which are "waiting". If it only has one task to keep busy, this is easy, if it has multiple multi-threading tasks, this becomes more work.

    The hardware needs to provide tools for the scheduler to do this job, which I think we are pretty close to having. If the hardware dictates too much about the scheduler/multi-thread model, it becomes rigid and you lose the flexibility to explore this interesting subject and offer multiple scheduler models.
  • Kerry SKerry S Posts: 163
    edited 2014-03-05 06:22
    I agree with mindrobots.

    We need just enough hardware support to be able to make smart, flexible, scheduler programs.

    Just because the scheduler can swap out a task does not mean it has to. It can communicate with the tasks under supervision with flags, and if one enters a critical phase it can easily set a "do not disturb" flag to let the scheduler know to let it have another block of time. The scheduler can also be smart enough to know how long the max do-not-disturb time should be, and if a task goes over that then we probably have a hung task, and it can swap it out and possibly do other things to recover.

    Remember the main reason for the preemptive multitasking feature is flexibility to use all of the cogs power most effectively. That is going to require smart schedulers, tasks and task/cog allocation.
  • Bill HenningBill Henning Posts: 6,445
    edited 2014-03-05 06:33
    Rick & Kerry,

    100% agreed.

    I also think at this point, all we need is a bit of hardware support for a task (or a thread within a task) yielding/exiting.

    The simplest scheme I can think of only takes one instruction:

    YIELD dest, #code ' code must be non-zero ie 1..511

    This instruction would do the following:

    - write "code" to the "dest" register
    - loop to itself

    A nice simple mechanism, that allows a lot of possibilities:

    - the scheduler can monitor dest by "tjnz dest,#handler"
    - the handler can take an action based on the contents of dest

    Possible actions are: (numbers are arbitrary)

    - if dest==$0FF, shut down task (ie exit())

    - if dest = {1..128} wait for specified event or timeout (ie select()) - note exact mechanism would be software defined

    - if dest = {129..254} we have a breakpoint 1..127, exit is just a breakpoint that never returns

    - if dest = {$100..$1FF} perform a system software function {ie getch, putch, puts etc}

    The nice thing is that all of the above - with the exception of the "YIELD" instruction itself (though perhaps SYSTEM is a better name) - is software defined.

    After YIELD is starved by the scheduler, when that task/thread gets cycles again, it can resume on the next instruction (by adding 1 to the saved PC)

    This is far simpler to understand, provides far more capability, and takes far fewer cycles (and less code) than decoding a task long.

    And dare I say it... having this simple instruction is in the "propeller way" - simple hardware, push complexity off to software.

    NOTE:

    This could be done as

    mov dest,#code
    jmp #self

    but given how often this would occur in code, it would save a LOT of hub memory to have a separate instruction to do this.

    The amount of gates/logic to implement this should be trivial, and it solves:

    - yielding
    - task/thread exiting
    - breakpoints
    - some system library calls
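    The scheduler's side of this proposal can be sketched in Python (a model only; the code ranges and action names are the arbitrary examples given above, and on the P2 the poll would be a "tjnz dest, #handler"):

```python
# Model of the scheduler's handler for the proposed YIELD dest, #code:
# a nonzero dest means the task has parked itself, and the code selects
# the software-defined action. Ranges follow the examples above
# (255 = $0FF is exit, so the breakpoint range 129..254 yields 1..126).

def dispatch(dest):
    """Return the action the scheduler would take for a given yield code."""
    if dest == 0:
        return "running"                     # no yield pending
    if dest == 0x0FF:
        return "exit"                        # shut down task (exit())
    if 1 <= dest <= 128:
        return "wait"                        # wait for event/timeout (select())
    if 129 <= dest <= 254:
        return f"breakpoint {dest - 128}"    # breakpoint number in this model
    if 0x100 <= dest <= 0x1FF:
        return "syscall"                     # system software function
    return "unknown"

assert dispatch(0) == "running"
assert dispatch(0x0FF) == "exit"
assert dispatch(64) == "wait"
assert dispatch(130) == "breakpoint 2"
assert dispatch(0x180) == "syscall"
```

    Everything in the dispatch table is software, which is the point of the proposal: one trivial instruction, with all policy pushed to the scheduler.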
  • mindrobotsmindrobots Posts: 6,506
    edited 2014-03-05 06:51

    And dare I say it... having this simple instruction is in the "propeller way" - simple hardware, push complexity off to software.

    +1

    Good job of "unstructioning" :lol:

    SYSTEM is better than YIELD, I prefer ER (Executive Request) but that's nostalgic for me, some folks might like INT21H or INTRPT due to a tainted past, maybe PREEMPT since we are implementing pre-emption instead of interruption?

    If you have kids, HEY! might seem appropriate. :smile:
  • ctwardellctwardell Posts: 1,716
    edited 2014-03-05 06:55
    I agree completely with Bill's suggestion.

    Fiddling with SETTASK for signaling purposes, while a novel idea, seems overly complex.

    C.W.
  • Heater.Heater. Posts: 21,230
    edited 2014-03-05 07:08
    mindrobots,
    ...maybe PREEMPT since we are implementing pre-emption instead of interruption?
    I have seen this kind of statement here a few times. I don't get it.

    In what way is pre-emption different from interruption?

    From the point of view of the guy being preempted it's an interrupt.
    From the point of view of the hardware implementing it, it's an interrupt.
  • SeairthSeairth Posts: 2,474
    edited 2014-03-05 07:14
    Heater. wrote: »
    Sounds like it's time to remove the big multiplier.

    Yes it can execute parallel with other code but I'm convinced that will almost never be used in practice. Firstly because it's hard to do. Secondly because often it's impossible. Compiler writers are almost certain to not make use of that parallelism feature in code generation especially in the face of threading. (Am I right compiler writers?).

    With the problems it causes for threading it seems we could do without the big multiplier.

    Unless of course it shares a lot of its logic with other functions in CORDIC or elsewhere.

    So this got me thinking about alternative approaches. In particular, I started wondering what it would take to move some of these parallel capabilities off-P2. For instance, Parallax currently sells the uM-FPU, which provides floating-point and fixed-point math routines. Now, what if you were to do the same sort of thing to the P2, moving all of the CORDIC, big-multiplier, etc to a separate chip that's accessible via SerDes? Suppose a dedicated math chip were developed that contained, say, 4 of each function. And further, suppose that the internal clock could run upwards of 4 times (or more?) as fast as the P2 itself. I could see the following pros/cons:
    • PRO: Frees up (significant?) space for other features.
    • PRO: Possibly allow for faster clock speed in P2?
    • PRO: External chip could be revised (e.g. to add FP math, FFT, etc.) without having to release a new version of P2.
    • PRO: External chip could actually be sold for use with other MCUs (e.g. add advanced maths to arduino, for instance)
    • CON: Off-chip (even if running at higher clock speed) would be slower than on-chip.
    • CON: Increases complexity and code
    • CON: Increases overall price (I'm assuming that the P2 wouldn't be any cheaper) for those that need the functionality.

    NOTE: I am *NOT* suggesting that this should be done. Unless it makes a lot of sense. In which case, I am. :)
  • mindrobotsmindrobots Posts: 6,506
    edited 2014-03-05 07:17
    Heater. wrote: »
    mindrobots,

    I have seen this kind of statement here a few times. I don't get it.

    In what way is pre-emption different from interruption?

    From the point of view of the guy being preempted it's an interrupt.
    From the point of view of the hardware implementing it, it's an interrupt.

    There is no difference. It's not my term. If someone pre-empts me I get just as cranky as if they interrupt me.

    I see it as: multi-threading is either cooperative (the thread must yield control) or pre-emptive (control WILL be taken away from a thread). The mechanism for the pre-empting is not consequential; just knowing that it will happen and that it has to be dealt with is enough. To the thread, every pre-emption is an interruption, whether it is an actual interrupt firing off somewhere or dad looking at his watch every so often and after an hour yelling, "that's enough TV!".

    I don't know why people make the distinction, I should not perpetuate it. Some folks don't like to hear propeller and interrupt in the same sentence.
  • potatoheadpotatohead Posts: 10,261
    edited 2014-03-05 07:25
    Re: Removing parallel features.

    Yes, they cause problems with tasking. But, there are a lot of COG or TASKRET type use cases where they are a nice benefit. Having nice, fast, big math on chip is a real plus!

    Besides, there are lots of ways we can employ those features and use tasking. Rather than let tasking dilute some otherwise killer features, perhaps it makes better sense to make sure we identify when using those features makes the best sense. One example would be the math COG as we had on P1, due to the need to get the higher order math done in software.

    Once it's "off chip", then we increase BOM part count, potential complexity of interaction, variation in components, etc... Fair to think about, but I really don't see major benefit to removing these things.
  • Bill HenningBill Henning Posts: 6,445
    edited 2014-03-05 07:28
    moving math off chip is a bad idea

    - even with 66 MHz SPI, setting up a 32x32 MUL and reading back the result will take more than 128 * 3 = 384 clock cycles, vs. 16 on-chip.

    VERY BAD IDEA.
  • Kerry SKerry S Posts: 163
    edited 2014-03-05 07:29
    mindrobots wrote: »
    There is no difference. It's not my term. If someone pre-empts me I get just as cranky as if they interrupt me.

    I don't know why people make the distinction, I should not perpetuate it. Some folks don't like to hear propeller and interrupt in the same sentence.

    Perhaps we should call it "Self Supervised Tasking" and end up with an SST Propeller Cog