The New 16-Cog, 512KB, 64 analog I/O Propeller Chip - Page 52 — Parallax Forums



Comments

  • ctwardell Posts: 1,716
    edited 2014-05-01 08:19
    Heater. wrote: »
    Yes, all these schemes are equally evil :)

    Can you explain why you think the allocation table scheme is evil?

    Please refrain from the using the straw man that everyone will write objects that need to be assigned all the slots.

    Chris Wardell
  • Heater. Posts: 21,230
    edited 2014-05-01 08:21
    Ramon,

    Yep. Same useless argument.

    Nobody is trying to break anything. We all want the best outcome. I am trying not to break determinism. Others are trying not to break a COG's ability to reach its full potential speed.

    It's a value judgement. Not exactly a technical one.

    At the end of the day it's Chip's value judgement that will decide how it goes.
  • potatohead Posts: 10,253
    edited 2014-05-01 08:22
    @Heater, I've not been there for a long while myself. At one point, I was interested and thinking about getting a dev board. Baggers mentioned he used his to consume workbench drawer space, and so there you go... The threads I followed at that time did involve using the sophisticated tools to understand what the chip will do, along with lots of detail in assembly on how to make them do it when exceptions arose, etc... I found the whole thing far more complex than I did anything we did on P1.

    Maybe I'll search. I've got a few things piling up, like I owe David some DAT section examples... So maybe I won't.

    Do we think we will end up with tools like that? Really?

    Do we think mortals will go and use them? Really?

    https://www.xmos.com/discuss/viewtopic.php?f=3&t=127

    Interesting quote from you on that one :)
  • Heater. Posts: 21,230
    edited 2014-05-01 08:26
    Chris,
    Can you explain why you think the allocation table scheme is evil?
    Yes.

    I want the PII yesterday. In fact we have all wanted the PII a thousand yesterdays ago.

    Such a scheme is more design work, more delay.
    Perhaps a bit more performance can be tweaked out that might be useful in some rare cases.
    I don't care. Get me the chip NOW.

    Besides, such a scheme is a bunch of complexity in using the thing that nobody needs. Complexity is evil.
  • Heater. Posts: 21,230
    edited 2014-05-01 08:50
    potatohead,

    Yep, I have a couple of XMOS dev kits in their boxes consuming bookshelf space :)

    I had a lot of fun trying them out. Flash some LEDs. Make a software UART. Get the heater_FFT working and so on.

    That all went easily enough. The Eclipse based dev tools are horribly bloated and a bit complex to get to grips with but I managed.

    Never needed to get into any assembler.

    I really like the ideas behind and facilities provided by the XC language they have there.

    What soured it for me?

    1) Those dev tools. They start to weigh on you after a while.

    2) Each core gets 64K RAM for code and data. So I have 4 cores and 256K of RAM. But I still have more hassle than I should putting a Z80 emulation and a CP/M OS in there. Of course, nobody would want to do that but me :)

    3) The ports. Figuring out which pins can be configured into what size I/O ports, and which ports can be used with other ports. That might be OK for a professional mapping the thing onto some product design, but it's a pain to just grab a dev board and get it doing what you want in the free hour or so I may have.

    4) The ports again. Pins are connected to cores. Want to move some of your code to another core? :) I can't reach the pins I'm using any more.

    5) The marketing guys went bad. Decided to call threads "logical cores" and cores "tiles". Blech.
    Do we think we will end up with tools like that? Really?
    I really hope the Propeller tools stay simple.
    Do we think mortals will go and use them? Really?
    No.
  • ctwardell Posts: 1,716
    edited 2014-05-01 08:58
    Heater. wrote: »
    Chris,

    Yes.

    I want the PII yesterday. In fact we have all wanted the PII a thousand yesterdays ago.

    Such a scheme is more design work, more delay.
    Perhaps a bit more performance can be tweaked out that might be useful in some rare cases.
    I don't care. Get me the chip NOW.

    Besides, such a scheme is a bunch of complexity in using the thing that nobody needs. Complexity is evil.

    So just assumptions and what suits you and what you think others need.

    I'll just leave it at that. This is why I waste little time here anymore.

    Chris Wardell
  • 4x5n Posts: 745
    edited 2014-05-01 09:16
    Heater. wrote: »
    Chris,

    Yes.

    I want the PII yesterday. In fact we have all wanted the PII a thousand yesterdays ago.

    Such a scheme is more design work, more delay.
    Perhaps a bit more performance can be tweaked out that might be useful in some rare cases.
    I don't care. Get me the chip NOW.

    Besides, such a scheme is a bunch of complexity in using the thing that nobody needs.

    I agree that the "replacement" (not really a replacement, more like an augmentation) to the P1 needs to come quickly and not change any of the things that make the P1 what it is. Things like soft peripherals, all I/O and cores (cogs) being equal, deterministic timing, etc, etc, etc.

    I'm afraid that things like "allocation tables" for priority of hub access for the cogs will have a MAJOR impact on deterministic timing. What do you do when the objects you use require more than 100% of hub access and they depend on a given frequency for timing? I'm also seeing that direct hub access from the cores (cogs) outside of the access window, and core-to-core (cog-to-cog) communication, are making those access windows less and less important. That makes all this allocation table complexity not worth it.

    A while back Ken posted a list of things that the larger customers of Parallax are asking for. In my never humble opinion that list should be the guide and all feature requests should be compared to that list!
  • Heater. Posts: 21,230
    edited 2014-05-01 09:29
    ctwardell,
    So just assumptions and what suits you and what you think others need.
    Yes, exactly.

    The same as you. Unless you are arguing for things that you don't actually want yourself. Which is possible I guess.

    What else are we supposed to do on a forum whose owners graciously invite us to contribute our opinions?

    By the way. Adding features, like any kind of HUB priority scheme, will take time and effort on Chip's part. There is no assumption there, I'm certain it's a fact. I want silicon now :)
  • potatohead Posts: 10,253
    edited 2014-05-01 09:44
    As do I.

    Even mooch adds considerable complexity. The beauty of the round robin concept is the timing of everything is a known, and it meshes together.

    We always know what code on a COG will do.

    Add the passive mooch. Just one bit of control: moocher or compliant. Now let's say one COG is mooching. Some logic is needed to determine whether or not the designated COG is going to use its cycle. How do we do that? Would we not need to slow things down to add more processing steps per HUB cycle, or would we not require some signal to indicate a cycle use is going to happen? Further, there needs to be a decision at each cycle that isn't there now. How is that decision made?

    Now add a few moochers. Another decision and some logic, right? Which COG gets the mooch when several are looking to do so?

    Make all the COGs moochers? Now it's all decisions all the time.

    Adding just this feature would slow a timing path which is fast, simple, clean.

    It is this detail we really need to work on and test. Seems to me, this design isn't the place to do that, given we have a goal of real silicon before we can't.

    Teams of people have worked on these problems to varying degrees of success and complexity.

    It is not as simple as the idea is. Executing that idea, even just mooch, is going to take some real thought and work.

    Round robin was the product of Chip's thought over years. Doing these things isn't going to be any different.
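The coupling potatohead describes can be made concrete with a toy model. This is purely illustrative Python, not anything from the actual P2 design: `wants_slot` and the arbitration rule (lowest-numbered moocher wins) are made-up assumptions.

```python
def round_robin_slots(n_cogs, n_cycles):
    """Which COG owns the hub on each cycle under the stock fixed rotation."""
    return [cycle % n_cogs for cycle in range(n_cycles)]

def mooch_slots(n_cogs, n_cycles, wants_slot, moochers):
    """Round robin, except an unused slot is handed to the lowest-numbered
    moocher that wants it on that cycle (one possible arbitration rule)."""
    schedule = []
    for cycle in range(n_cycles):
        owner = cycle % n_cogs
        if wants_slot(owner, cycle):
            schedule.append(owner)
        else:
            takers = [c for c in moochers if wants_slot(c, cycle)]
            schedule.append(takers[0] if takers else None)
    return schedule
```

Under `round_robin_slots`, COG N's accesses land every `n_cogs` cycles no matter what any other COG does. Under `mooch_slots`, a moocher's access times depend on whether the slot owner happened to use its cycle, which is exactly the extra per-cycle decision (and loss of isolation) being debated.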
  • Heater. Posts: 21,230
    edited 2014-05-01 10:41
    I think the proper way to tackle this shared HUB memory bandwidth problem is not to tinker around with determinism-violating HUB scheduling schemes that result in undesirable coupling between processes.

    No, what we should do is give each COG its own chunk of RAM that it can access at full speed all the time.

    Throw out the round robin HUB access mechanism.

    Add a bit of memory management.

    We have 512K of memory, so let's chop it into blocks, say 1K each. Give each COG N number of 1K blocks to play in.

    Each COG gets full bandwidth access to its blocks. COGs cannot access blocks they don't own.

    The number of blocks allocated to each COG is determined at compile time from the variables and buffers it declares.

    How do COGs communicate in this scheme?

    No idea. Perhaps put back that round-robin thing but this time only operating over a bunch of blocks that are "shared" between cores.

    NUMA anybody? Sounds perfect for what we want to do. http://en.wikipedia.org/wiki/Non-uniform_memory_access

    Sounds like a Prop III idea.
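Heater's block-ownership thought experiment sketches out in a few lines. This is a toy model of the idea as stated in the post (512 KB carved into 1 KB blocks, ownership fixed at compile time), not a real Propeller feature; the class and method names are invented.

```python
BLOCK_SIZE = 1024                          # 1 KB blocks, per the post
TOTAL_BLOCKS = 512 * 1024 // BLOCK_SIZE    # 512 KB of hub RAM -> 512 blocks

class BlockRam:
    """Hub RAM carved into blocks, each owned by at most one COG."""
    def __init__(self):
        self.owner = [None] * TOTAL_BLOCKS

    def allocate(self, cog, n_blocks):
        """Grant `cog` the first n_blocks unowned blocks (the compile-time
        step Heater describes, driven by declared variables and buffers)."""
        free = [b for b, o in enumerate(self.owner) if o is None][:n_blocks]
        if len(free) < n_blocks:
            raise MemoryError("not enough free blocks")
        for b in free:
            self.owner[b] = cog
        return free

    def may_access(self, cog, address):
        """Full-speed access is allowed only inside blocks the COG owns."""
        return self.owner[address // BLOCK_SIZE] == cog
```

The open question in the post survives the sketch: nothing here lets two COGs see the same block, so communication would need shared blocks with some arbitration, which is the round-robin problem all over again.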
  • ctwardell Posts: 1,716
    edited 2014-05-01 10:51
    Heater. wrote: »
    ctwardell,

    Yes, exactly.

    The same as you. Unless you are arguing for things that you don't actually want yourself. Which is possible I guess.

    What else are we supposed to do on a forum whose owners graciously invite us to contribute our opinions?

    By the way. Adding features, like any kind of HUB priority scheme, will take time and effort on Chip's part. There is no assumption there, I'm certain it's a fact. I want silicon now :)

    The only reason I'm bothering to argue about this is that there are use cases, mainly hub execute, that would benefit, and if you don't want to use it you don't have to.

    The argument that it should not exist because someone might understand it poorly and misuse it seems illogical to me.

    If it is too complex to implement or will add significant delay, leave it out.

    Chris Wardell
  • koehler Posts: 598
    edited 2014-05-01 11:07
    Heater. wrote: »
    Hey guys,
    But watch this. This will knock your socks off. See I just downloaded this turbo-video driver object. Drop it into my project and add some calls to it to display live data on this screen here.
    Oh, wait, that's not right, my servos have gone mad. That robot arm is going to smash something. That's odd it's been working all week. Can't get anything on the screen either, strange. Let me just have a look at this....just a sec...
    Hey guys, where are you going? Come back it does work. Well it did, honest, this won't take long...guys...guys...oh ****.

    And the moral of the story is:
    Don't break determinism with any silly HUB slot sharing scheme. Don't add coupling between COGS, in timing, by doing away with the round-robin HUB.
    As our story illustrates this is akin to all the problems of building programs out of component parts that need interrupts and priorities and/or an RTOS.
    If you can do it without inducing the above embarrassment, then all well and good.

    So, I'm confused here.

    Someone has the technical competence to do all of this, but not enough common sense to read the object's docs that stipulate that the object needs 2 COGs, with one primarily being used as a hub bandwidth donor?
    AND, out of the 16 COGs he doesn't happen to have enough free COGs to support it.
    AND, most importantly, there is no way to do what this object actually does WITHOUT having that required bandwidth.

    In reality, this User would never have had the OPTION of using this object because it never could have been written without hub sharing.

    I can see the utility in trying to keep things simple wherever possible, however there comes a point where making something idiot-proof is going to seriously impact a lot of users' ability to innovate.

    I'm still failing to see where the altar of Determinism is being defiled with hub sharing.

    If some objects require 2, 3, or 5 COGs, how does that affect in any way the determinism of the other COGs running their legacy objects?

    Cripes, we've got 16 of them now, and with all the improvements it's almost arguable that that's 'almost' too many.
    A brain-dead newbie like me who did something stupid like what was described would have posted on the forum and been told to RTFM the object's doc, where it plainly states it needs 2, 3, or 5 COGs to work properly.

    I really must be missing something, aside from the possible point that a brain-dead newbie like me might somehow end up tossing my legacy COG code into a COG that's already been assigned to share its hub slot?
  • Heater. Posts: 21,230
    edited 2014-05-01 11:15
    ctwardell.
    ...someone might understand it poorly and misuse it seems illogical to me.
    Today we are celebrating the 50th birthday of the BASIC language. A language so dumb and stupid and slow that it should have been drowned at birth. But it stormed the world and enabled people to do all kinds of stuff with computers that they would never have imagined otherwise. It grew the industry enormously. Yes, those users were too dumb or too busy to get to grips with ALGOL or FORTRAN.

    Every little twist in complexity is a barrier to acceptance.
  • Roy Eltham Posts: 2,996
    edited 2014-05-01 11:17
    All I see is a few people arguing for a feature that they think they need without considering the importance, scope, or full implications of that feature on the chip as a whole. Cluso has given some what-if examples of things he thinks might need the feature. I think the chip will handle those fine without it. I think you people are underestimating just how powerful this P2 is... 200 MHz clock, 16 COGs, 100 MIPS per COG, smart analog I/O pins, many new instructions...

    Theoretical throughput aside, you need instructions to do something with the data you are pulling from the HUB to the COG (or vice versa). Currently, you have 8 instructions' worth of cycles between accesses. You can read/write 4 longs of data each access. Those 4 longs either need to come in from the pins or be sent to the pins. Unless you are banging out 32 pins at a time, you are likely going to need more than 8 instructions per access to do anything useful.

    The only case this really helps is hubexec/LMM. So we are talking about hubexec being able to go ~50 MIPS versus ~100 MIPS, barring any other hub accesses, branches, or waits. The solution I prefer is one that Chip has already said he would consider when the time comes, which is to widen the window between HUB and COGs to allow 8 longs instead of 4 per access.

    Unless someone can show me an actual codable use case that will use more than one hub access every 8 instructions (aside from hubexec/LMM), I'm going to continue doubting any of you have really thought about this feature much beyond the starry-eyed wish for theoretical bandwidth calculations without any practical use case possible. At best, you might be able to come up with some case that would benefit from doubling the HUB/COG access rate or packet size, but I seriously doubt it would be truly required and I bet it would be so limited as to be not really all that useful.
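The figures Roy quotes (100 MIPS per COG, a 4-long hub window every 8 instructions) imply a specific per-COG bandwidth number, worked out below. These are the thread's working assumptions, not datasheet values.

```python
# Back-of-envelope hub bandwidth per COG from the numbers in this thread.
MIPS_PER_COG = 100_000_000      # instructions per second, as quoted
INSTR_PER_WINDOW = 8            # instructions between hub access windows
LONGS_PER_WINDOW = 4            # longs transferable per window
BYTES_PER_LONG = 4

windows_per_sec = MIPS_PER_COG / INSTR_PER_WINDOW                # 12.5 M/s
bytes_per_sec = windows_per_sec * LONGS_PER_WINDOW * BYTES_PER_LONG

print(f"{bytes_per_sec / 1e6:.0f} MB/s hub bandwidth per COG")   # prints "200 MB/s hub bandwidth per COG"
```

Which is Roy's point in numbers: 200 MB/s per COG is already more than a COG can usually consume with only 8 instructions between windows to process the data.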
  • koehler Posts: 598
    edited 2014-05-01 11:23
    Heater. wrote: »
    Chris,

    Yes.

    I want the PII yesterday. In fact we have all wanted the PII a thousand yesterdays ago.

    Such a scheme is more design work, more delay.
    Perhaps a bit more performance can be tweaked out that might be useful in some rare cases.
    I don't care. Get me the chip NOW.

    Besides, such a scheme is a bunch of complexity in using the thing that nobody needs. Complexity is evil.

    OK, however with due respect, and I mean it, that comes across as rather personal to you and some others here.
    What is actually important, and proper, is for Parallax to look at the pros and cons of the actual idea, and make a determination on whether or not there is any real impact or delay to the overall project in time, money, or whatever.

    SO FAR, I have seen NOTHING from Ken or Chip on what the current state of the project is, and I'm not actually suggesting that they should provide it.
    Unless it would help somehow with the arguments in the thread to be more productive.

    There is a timeline of some sort, that probably is FPGA release soon at x% completion of X% features.
    When that comes out, there will be Parallax and community testing. In parallel there will be bug fixes for noted issues and continued work on the remaining Y% of features.
    Additionally, depending upon testing and time, Chip may work on "nice to have features" as time allows or is ultimately available before final testing and potentially a shuttle run.

    It probably WOULD help if Chip/Ken could weigh in on some of these questions, as it's possible a lot of the "I want it now!" arguments may be skewing forumistas' POV, as there may actually be time enough.

    IF hub sharing WERE to actually be relatively simple to implement by Chip, shouldn't that be Parallax's decision to make from a business perspective rather than to simply appease?
  • Heater. Posts: 21,230
    edited 2014-05-01 11:27
    koehler,


    This is not just an issue that impacts the "brain-dead newbie".


    Every little hiccup like that costs time and hence money for even professional developers.


    Yes everyone should RTFM.


    But the FM should be as short and simple as possible with no hidden surprises.
    In reality, this User would never have had the OPTION of using this object because it never could have been written without hub sharing.
    This is perhaps true. It is my contention that the number of times that might happen is very small, and that the added complexity and development time needed to make it possible is not worth it.


    A chip that works now is worth a thousand times more than one that may possibly work 10% faster in some cases in a month's time.
  • koehler Posts: 598
    edited 2014-05-01 11:32
    4x5n wrote: »
    What do you do when the objects you use require more than 100% of hub access and they depend on a given frequency for timing? I'm also seeing that direct hub access from the cores (cogs) outside of the access window, and core-to-core (cog-to-cog) communication, are making those access windows less and less important. That makes all this allocation table complexity not worth it.

    A while back Ken posted a list of things that the larger customers of Parallax are asking for. In my never humble opinion that list should be the guide and all feature requests should be compared to that list!

    What you do is simple.
    Look at the Object Docs before starting?

    You have x number of OBEX objects that use x COGs.
    You now have 16 COGs.
    You have a new Object that you want to use that says it needs y (2,3,5) COGs bandwidth.

    16 - x = remaining COGs
    If remaining COGs >= y, then you should feel safe.

    Or, bin the entire hub sharing concept, and force people to go back to the drawing board and interweave that Jumbo Object into some crazy dual- or tri-COG program, every single time you simply don't have enough bandwidth from 1 COG.

    Since this is going to be VOLUNTARY anyway, it seems like an overall dumbing down of the new Prop's potential on the off chance that it may impact someone's project.
    Thats a great way to do business.
  • ctwardell Posts: 1,716
    edited 2014-05-01 11:34
    Roy Eltham wrote: »
    The only case this really helps is hubexec/LMM. So we are talking about hubexec being able to go ~50 MIPS versus ~100 MIPS, barring any other hub accesses, branches, or waits. The solution I prefer is one that Chip has already said he would consider when the time comes, which is to widen the window between HUB and COGs to allow 8 longs instead of 4 per access.

    For hubexec it is about reducing latency for branches and accessing the hub for data.

    In the general case it is more about latency to next access opportunity than about increasing raw bandwidth.

    Think of it more like reducing the seek time on a disk drive than increasing the instantaneous transfer rate.

    Chris Wardell
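ctwardell's seek-time analogy can be put as simple arithmetic. A hedged sketch: it assumes a COG's slots are spread evenly around the rotation and that it just missed a window, which is the worst case; the function name and the even-spacing assumption are mine, not from the thread.

```python
def worst_case_wait(total_slots, slots_owned):
    """Slots a COG can wait for its next hub window, assuming its
    slots are evenly spaced and it just missed one (worst case)."""
    return total_slots // slots_owned - 1

# Stock 16-COG round robin: up to 15 slots of dead time.
print(worst_case_wait(16, 1))   # prints 15
# One donated slot roughly halves the worst-case latency.
print(worst_case_wait(16, 2))   # prints 7
```

This is the latency-not-bandwidth framing: extra slots matter most to a hubexec branch that stalls until the next window, not to raw sustained transfer rate.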
  • Heater. Posts: 21,230
    edited 2014-05-01 11:38
    koehler,
    ...that comes across as rather personal to you and some others here.
    Yes it does. And yes it is. On both sides of the debate. How else could it be? We are all just expressing our opinions on a forum where we were invited to do so.


    Let me make it even more blunt and personal. I want the PII in silicon as soon as possible. I don't want any stupid stuff thrown in there that is going to cause any more delays. Get me it, NOW.
    What is actually important, and proper, is for Parallax to look at the pros and cons of the actual idea, and make a determination on whether or not there is any real impact or delay to the overall project in time, money, or whatever.
    I agree. It's up to them. We can only voice our opinions.
  • potatohead Posts: 10,253
    edited 2014-05-01 11:46
    What can't be written without hub slot sharing? In your answer show how it can't also be parallelized.
  • koehler Posts: 598
    edited 2014-05-01 11:50
    Heater, I have to admit that, considering Roy's post, I am not sure what the throughput would ultimately be with hub sharing.
    However, is it really only 10%, or is it 50, 100, 200%, or greatly reduced latency?
    That seems a crucial question, and something Chip/Ken should discuss, as it could have a bearing on the product's ability to generate revenue.

    At this point, we have no way of knowing if this feature will in fact delay the new product tape-out at all.
    So, until or unless Chip/Ken state that this feature will in fact potentially cause a delay, it seems like it should not automatically be declared to be a cause for delay.

    I have confidence that Chip can ascertain the complexity involved, and Ken can look at that and the potential upside of the feature to their marketing of the product to reach a smart business decision.

    Also:
    "Every little hiccup like that costs time and hence money for even professional developers. "

    OK, so then the professionals who want to play it safe can restrict themselves to using Legacy/standard objects and avoid a case of the hiccups.
    The others can spend a little time investigating and use the same Legacy/standard objects as well as some really cool, fast ones that are unfeasible otherwise, right?
    Heater. wrote: »
    koehler,


    This is not just an issue that impacts the "brain-dead newbie".


    Every little hiccup like that costs time and hence money for even professional developers.


    Yes everyone should RTFM.


    But the FM should be as short and simple as possible with no hidden surprises.

    This is perhaps true. It is my contention that the number of times that might happen is very small, and that the added complexity and development time needed to make it possible is not worth it.


    A chip that works now is worth a thousand times more than one that may possibly work 10% faster in some cases in a month's time.
  • ctwardell Posts: 1,716
    edited 2014-05-01 12:01
    potatohead wrote: »
    What can't be written without hub slot sharing? In your answer show how it can't also be parallelized.

    Nice straw man. As if, unless something can be shown that can't be done without slot sharing, nothing would benefit from hub slot sharing.

    Can you deny that hubexec would benefit from reduced latency when the cache needs to be loaded or data is needed?

    I'm amazed that there is evidently a group of people out there that cannot be expected to handle the concept of hub slot sharing, yet are capable of breaking a program into all manner of parts that can be executed in parallel and keeping it all straight and running along smoothly, while managing to not encounter hub access delays among the various cogs running all those parallel tasks.

    Chris Wardell
  • Dave Hein Posts: 6,347
    edited 2014-05-01 12:07
    potatohead wrote: »
    What can't be written without hub slot sharing? In your answer show how it can't also be parallelized.
    Or as I once heard a project manager say to his team, "We're adding more resources to the project so we can paralyze it." He meant to say "parallelize", but "paralyze" turned out to be an accurate description.
  • potatohead Posts: 10,253
    edited 2014-05-01 12:10
    I think hubexec can benefit from parallelism. Run two hubexec COGs and augment them both with PASM COGs just like we do on P1 now.

    That is why I asked the question. We have more COGs, and using them together maximizes the design overall.

    Where can this not be done?

    BTW: Agreed with Roy. When it's running, and the significant details are worked out, more longs per access makes a ton of sense.
  • Heater. Posts: 21,230
    edited 2014-05-01 12:40
    ctwardell,
    I'm amazed that there is evidently a group of people out there that cannot be expected to handle the concept of hub slot sharing, yet are capable of breaking a program into all manner of parts that can be executed in parallel and keeping it all straight and running along smoothly, while managing to not encounter hub access delays among the various cogs running all those parallel tasks.
    We understand the concept of HUB slots. And the possibilities to do other than simple round-robin sharing. And the consequences of doing so.

    And for this reason we come down against it :)
  • ctwardell Posts: 1,716
    edited 2014-05-01 13:14
    Heater. wrote: »
    ctwardell,

    We understand the concept of HUB slots. And the possibilities to do other than simple round-robin sharing. And the consequences of doing so.

    And for this reason we come down against it :)
    ctwardell wrote: »
    I'm amazed that there is evidently a group of people out there that cannot be expected to handle the concept of hub slot sharing, yet are capable of breaking a program into all manner of parts that can be executed in parallel and keeping it all straight and running along smoothly, while managing to not encounter hub access delays among the various cogs running all those parallel tasks.

    I'm not saying you don't understand the concept, I'm talking about the poor innocent masses that you are so nobly trying to protect from themselves.

    C.W.
  • Heater. Posts: 21,230
    edited 2014-05-01 13:23
    Those poor innocent masses just want to be able to grab this and that software which transforms a Propeller from a useless hunk of silicon into a machine with UART, PWM, I2C, SPI, even video or perhaps USB and Ethernet peripherals, etc. Like all those other chips out there.

    And have those things work with no fuss or surprises. Flexibility is king here, otherwise why not just use an STM32 F4 or whatever?

    Then they can add their application code and be done.
  • jmg Posts: 15,140
    edited 2014-05-01 13:40
    Heater. wrote: »
    It's a value judgement. Not exactly a technical one.

    At the end of the day it's Chip's value judgement that will decide how it goes.

    Ah, a value judgement, not a technical one. Now it becomes clearer.

    That explains why I cannot follow or fathom the 'Logic' behind the reaction of not giving users control.

    It is not derived from the technical; it is based on someone else's perceived value, like art or music.
    Even there, however, I choose my own music and art.
  • ctwardell Posts: 1,716
    edited 2014-05-01 13:40
    Heater. wrote: »
    Flexibility is king here

    Unless you want to make use of otherwise wasted hub slots...

    C.W.
  • Todd Marshall Posts: 89
    edited 2014-05-01 14:13
    Not having a detailed description of the P2, I'm left with the P1 model to work with. HUBEXEC (presumably the ability to execute code from the HUB as well as in the COGs) just elevates my original concern (or rather opportunity).

    In the P1 with 8 COGs I have a 16-clock latency between the HUB and a COG. I felt dropping that latency to 0 for 1 COG, or to 2 for 2 COGs, would naturally be a good thing ... and simple to do with my conceptual model of how the Propeller design is implemented (the HUB being an 8-channel MUX with a 3-bit counter driving it). Further, it's like a train roundhouse with 32K longs of memory on the turntable (HUB).

    But if it was an 8 (or more) element circular queue (LUT) of time slots, then putting COG numbers into the queue would allow less latency and more bandwidth and flexibility. And it wouldn't require any change to the conceptual model if COG numbers 0 to 7 (or 0 to 15) were placed into the queue initially. I didn't conceive of any effect on predictability at all, and still don't.

    True, the queue would need to be set up, but so do things like crystals and stacks and clock frequencies and clock modes. I would even be satisfied with a constraint that the queue and its contents were "pre-defined, one-time settable constants" as the manual describes for the others.

    Thinking the P2 with 16 COGs doubled the latency made me more concerned. I was evidently wrong because (if I'm not mistaken) the clock is running twice as fast.

    Then I thought, if I sequenced through the queue, modulo the number of COGs I wanted to service, I would have optimal throughput ... and again wouldn't affect predictability.

    Challenged to present an application where I would favor this behavior:
    How about a multichannel sampling oscilloscope? I set one COG to display results from memory. I then add COGs one by one, up to the number of channels I want to sample and display. Further, some channels I want to display are changing at a slower rate or are glitch free. But one channel is giving me glitches and I want to be sure to capture them. I want to oversample this channel and watch for glitches.

    Does such a model justify a Propeller where slots are time slots and not COG slots?

    And BTW, I don't know what a "mooch" is.
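The circular-queue (LUT) idea in the post above sketches out in a few lines. A toy model only, with invented names, not anything from the P2 design: the hub steps through a table of COG numbers instead of a fixed 0..15 rotation, so filling the table with 0..15 reproduces the stock round robin, while repeating one COG number oversamples it, as in the oscilloscope example.

```python
def slot_sequence(table, n_cycles):
    """Which COG gets the hub on each cycle, given the slot table (LUT)."""
    return [table[cycle % len(table)] for cycle in range(n_cycles)]

# Stock behaviour: the table is just COG numbers 0..15 in order.
stock = slot_sequence(list(range(16)), 32)

# Oversample COG 2 (the glitchy scope channel): give it every other slot,
# leaving the remaining slots for COGs 0, 1, 3, and 4.
scope = slot_sequence([2, 0, 2, 1, 2, 3, 2, 4], 16)
```

Note the scheme is still deterministic in the sense the poster intends: as long as the table is a one-time-set constant, every COG's slot spacing is fixed and known, just not necessarily uniform.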