Prop2 Analog Test Chip Arrived! - Page 11 — Parallax Forums



Comments

  • Electrodude Posts: 1,657
    edited 2016-11-25 04:56
    cgracey wrote: »
    David at Parallax had a neat idea about making COGINIT be able to launch cogs with some flags that restrict the started cog's usage of hub memory and I/O pins, as well as doing COGINITs and COGSTOPs. The idea is to be able to have a bullet-proof mode, where during development, things can be limited to not allow a rogue cog to crash the whole chip. This would be good for on-chip development. I always pictured one Prop2 being the development tool and another being the slave. Dave was saying that it would be much nicer to have immediate access to the memories and things. I agree. It would also mean that you wouldn't wind up needing two monitors and two mice for most projects, as they could be on the same system. Think of a PLC that runs while you can make modifications to it, even storing the source code locally on it.

    Taken to the extreme, this would entail memory protection by range and a bunch of other things, but we don't need that now. Just some simple limiters to allow cog development without jeopardizing the development system's processes. This could be maybe ten flops per cog.

    Instead of making COGINIT more complicated, why not start every cog with full privileges and let it drop its own privileges before jumping to untrusted code? That will make it easy to start LUT-paired cogs with different privileges. Then, you can have a sandboxed cog that can still make requests through LUT to its paired privileged cog. This would have neat uses like safely running untrusted code from somewhere (like over a network) and still being able to interface with it.
  • jmg Posts: 15,173
    cgracey wrote: »
    David at Parallax had a neat idea about making COGINIT be able to launch cogs with some flags that restrict the started cog's usage of hub memory and I/O pins, as well as doing COGINITs and COGSTOPs. The idea is to be able to have a bullet-proof mode, where during development, things can be limited to not allow a rogue cog to crash the whole chip. This would be good for on-chip development. I always pictured one Prop2 being the development tool and another being the slave. Dave was saying that it would be much nicer to have immediate access to the memories and things. I agree. It would also mean that you wouldn't wind up needing two monitors and two mice for most projects, as they could be on the same system. Think of a PLC that runs while you can make modifications to it, even storing the source code locally on it.

    Taken to the extreme, this would entail memory protection by range and a bunch of other things, but we don't need that now. Just some simple limiters to allow cog development without jeopardizing the development system's processes. This could be maybe ten flops per cog.

    Interesting idea.
    Would this also Protect the Clock control registers ?

    I think the SysCLK is still chip-wide, and once the PLL is started and selected for example, you cannot read/check the RC oscillators ?
    One useful re-purpose would be to allow the RC Slow to operate as a Watchdog oscillator.

  • cgracey Posts: 14,152
    cgracey wrote: »
    David at Parallax had a neat idea about making COGINIT be able to launch cogs with some flags that restrict the started cog's usage of hub memory and I/O pins, as well as doing COGINITs and COGSTOPs. The idea is to be able to have a bullet-proof mode, where during development, things can be limited to not allow a rogue cog to crash the whole chip. This would be good for on-chip development. I always pictured one Prop2 being the development tool and another being the slave. Dave was saying that it would be much nicer to have immediate access to the memories and things. I agree. It would also mean that you wouldn't wind up needing two monitors and two mice for most projects, as they could be on the same system. Think of a PLC that runs while you can make modifications to it, even storing the source code locally on it.

    Taken to the extreme, this would entail memory protection by range and a bunch of other things, but we don't need that now. Just some simple limiters to allow cog development without jeopardizing the development system's processes. This could be maybe ten flops per cog.

    Instead of making COGINIT more complicated, why not start every cog with full privileges and let it drop its own privileges before jumping to untrusted code? That will make it easy so that when you start LUT-paired cogs, you can drop the privileges of just one of the cogs. That way, you can have a sandboxed cog that can still make requests through LUT to its paired privileged cog. This would have neat uses like safely running untrusted code from somewhere (like over a network) and still being able to interface with it.

    Either way. I kind of like having COGINIT handle it because it doesn't affect the cog app. It would look like this: right now, the D in COGINIT is used to select the cog and mode, beyond bit 5, I think, those bits are don't-care and usually zeroes. By having non-zero possibilities for those bits, we could, say, have 16 bits to signal exclusions for each set of four I/O pins. Another bit could be set to inhibit COGINIT/COGSTOP privileges. Another for LOCKs, another for ATN interrupts, and so on. How to block the memory is the most complicated issue. To do real secure memory management is way beyond what we have time for now. We just need some protection for development usage.
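Chip's D-field packing could be modeled like this. This is a hypothetical sketch: the bit positions, field widths, and flag names are assumptions for illustration only, not the actual P2 COGINIT encoding.

```python
# Hypothetical sketch of packing restriction flags into the unused upper
# bits of COGINIT's D, as described above. All positions are illustrative.

PIN_GROUPS = 16          # one exclusion bit per group of 4 I/O pins (64 pins)
PIN_FLAGS_SHIFT = 6      # flags sit above the existing cog-select/mode bits
NO_COGCTL_BIT = 22       # inhibit COGINIT/COGSTOP (assumed position)
NO_LOCKS_BIT = 23        # inhibit LOCKxxx (assumed position)
NO_ATN_BIT = 24          # inhibit ATN interrupts (assumed position)

def make_coginit_d(cog_mode, excluded_pin_groups=(), no_cogctl=False,
                   no_locks=False, no_atn=False):
    """Pack restriction flags into the don't-care upper bits of D."""
    d = cog_mode & 0x3F                   # existing cog-select/mode field
    for g in excluded_pin_groups:
        d |= 1 << (PIN_FLAGS_SHIFT + g)   # 1 = writes to pins 4g..4g+3 ignored
    if no_cogctl:
        d |= 1 << NO_COGCTL_BIT
    if no_locks:
        d |= 1 << NO_LOCKS_BIT
    if no_atn:
        d |= 1 << NO_ATN_BIT
    return d
```

Since those upper bits are normally zero anyway, a D value built the old way would simply mean "no restrictions", which matches the backward-compatible default described above.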
  • potatohead Posts: 10,261
    edited 2016-11-25 05:04
    :D

    This is likely to make me happy. Be sure to think about where a set of dev tools might go, and make sure that whatever isolation solution happens, RAM is insulated.

    Originally, back when we were discussing features and the ROM, I asked for a write protected region of RAM. This was part of why.



  • Electrodude Posts: 1,657
    edited 2016-11-25 05:14
    Will there still be room in COGINIT's D to give the two cogs in a pair different privileges?

    If not, and if there are any extra instruction slots, you could still add an instruction for dropping privileges in addition to doing it through COGINIT.
  • cgracey Posts: 14,152
    Will there still be room in COGINIT's D to give the two cogs in a pair different privileges?

    If not, and if there are any extra instruction slots, you could still add an instruction for dropping privileges in addition to doing it through COGINIT.

    Good idea. The original and new could just OR together.
  • cgracey Posts: 14,152
    potatohead wrote: »
    :D

    This is likely to make me happy. Be sure to think about where a set of dev tools might go, and make sure that whatever isolation solution happens, RAM is insulated.

    Originally, back when we were discussing features and the ROM, I asked for a write protected region of RAM. This was part of why.



    Yes, memory, pins, and certain hub instructions need protection. So, if a cog throws a fit, it doesn't mess up anything else.
  • Go Chip!

    Happy Turkey day. Same to all of you.

    I'm home, stuffed, updating my FPGA. :D

  • Rayman Posts: 14,643
    Sorry, but I think memory protection for a microcontroller is a bit too much.
    Makes a lot of sense for devices that you want to run somebody else's code on.
    But, we're going to be running our own code, or at least code that we have source level access to...
  • Hmm... the protection thing is an interesting idea. I like the idea of sandboxing a cog. However, I see this being a non-trivial change to the design. Unless there happens to be a clear subset of features that can be easily disabled with a single bit flag, this feature would add complication. And risk. As the design currently stands, no one has any expectation of isolation. Once you add something like this, there will be an expectation. If that expectation cannot be met, it will be worse than not having the expectation in the first place.
    I'm with you on this one, Seairth. The P2 MAY be a transitional processor that lets a lot of nice mini-computer-like systems be designed with it, but everything like this that gets added takes away from what made the original Prop so great. With most micros, it takes a couple of weeks of learning the configuration bits just to get to where you can start using it. It isn't that way with the P1, and I hope it doesn't turn out that way with the P2.

    Features are a wonderful thing, but I watched the "Hot P2" get "just one more thing"'d into extinction. A bit or two that might make development easier is fine... but we've been in this exact "wrap up" stage before... gee, two, three years back. All that wonderful memory management, etc., might best wait until Parallax saves up that half a billion dollars to give Intel a run for its money.
  • cgracey Posts: 14,152
    edited 2016-11-25 17:45
    This is not real memory protection, but something to prevent a cog from taking the chip down when the chip is also being used as the development system. Two bits determine memory and hub op setting (00 = no restrictions) and sixteen bits inhibit pin writing for groups of 4 pins (1 = writes ignored). Normally, these bits are all 0, anyway, but can be made non-0 to set limitations. For the hour and few gates this would require, it would be worth it. I would have had this done yesterday, but I'm deep into the SPICE simulation for the PLL, right now.
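The pin-write gate Chip describes (sixteen bits, one per group of 4 pins, 1 = writes ignored) can be sketched in a few lines. The function name is invented; the mask layout follows his description.

```python
# Minimal model of the per-cog pin-write gate: a 16-bit inhibit mask,
# one bit per group of 4 pins; a set bit means writes to that group
# are silently ignored. Pin numbering 0..63 is assumed.

def pin_write_allowed(inhibit_mask, pin):
    """Return True if a write to `pin` passes the gate."""
    group = pin // 4
    return not (inhibit_mask >> group) & 1
```

With all sixteen bits clear (the normal case Chip notes), every write passes, so existing code sees no change.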
  • Rayman Posts: 14,643
    Well, I guess I can see some value to preventing pin access to things like the flash chip and keyboard and mouse and SD card for this application.

    Can you make an "event" that triggers when undesired access is attempted?
    Then, user could know there's a bug in the code they are writing...
  • cgracey wrote: »
    This is not real memory protection, but something to prevent a cog from taking the chip down when the chip is also being used as the development system. Two bits determine memory and hub op setting (00 = no restrictions) and sixteen bits inhibit pin writing for groups of 4 pins (1 = writes ignored). Normally, these bits are all 0, anyway, but can be made non-0 to set limitations. For the hour and few gates this would require, it would be worth it. I would have had this done yesterday, but I'm deep into the SPICE simulation for the PLL, right now.

    Okay, I misunderstood. But I still don't see how this is going to work effectively. While the debugger cog may COGINIT the "first" application cog, it wouldn't be starting any of the other cogs. All COGINITs trigger an initial debug hook in hub memory, but that's after COGINIT has been called. Given this, it makes more sense to leave COGINIT alone and add a separate instruction (or two) that would be called by the hooked debug code just after a COGINIT.

    Also, how would cogs behave when they attempt to perform a restricted operation? I'm guessing that such operations should be trapped by the debugger, then use SETBRK D to check for the restricted trap condition. If that works, providing fine-grained restrictions would allow a debugger to not only protect itself, but also detect situations where a cog is erroneously using any shared resource that it shouldn't be (i.e. potentially clobbering other application cogs).

    Or maybe there's another way. What if you had an instruction that simply sets a "trap shared resource instructions" flag. Then, every time a cog is about to execute one of those instructions (hub op, smart pin op, etc.), the debugger routine is called to check whether it's safe or not.

    Overall, though, none of this sounds really solid...
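The trap-and-check flow Seairth describes could look roughly like this. The condition code and handler are invented purely for illustration; they are not the actual SETBRK interface.

```python
# Sketch of Seairth's idea: a restricted operation traps into a debugger
# routine, which inspects the break condition and reports the offending
# access. RESTRICTED_TRAP is a hypothetical condition bit.

RESTRICTED_TRAP = 0x10   # invented break-condition code

def debug_break_handler(brk_condition, cog, instr):
    """Return a report string if the break was a restricted-access trap."""
    if brk_condition & RESTRICTED_TRAP:
        return f"cog {cog}: restricted {instr} blocked"
    return None          # some other break reason; not our concern here
```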
  • jmg Posts: 15,173
    Seairth wrote: »
    Overall, though, none of this sounds really solid...

    I think it is very hard to make anything really solid, in terms of all possible errant action, or even attacks.

    Common in Flash MCUs is keyed or simple password access for Write, and a means to lock out write, so a stack error or errant pointer cannot permanently corrupt code.

    Write protection flags in P2 should be simple HW, and give some protection against errant pointers, but that also means self-modifying code is off the table.

    The HUB memory could be mapped into asymmetric zones, for Shared Data (free R/W) and Code/Rom (read only)

    Pin mapping exclusion could look good as a marketing bullet, but unless you have pin-pointers, how likely is it to accidentally access a wrong pin ? - I can see some use for novice-protection, but in that case, would their library not enable the (bad) pins by mistake anyway ?

    The P2, I think, already has Modulus opcodes that can help with pointer-constraining, though at some small run-time cost. This could be a compile-time option, like Range checking is on PC compilers.

    From a system-reliability viewpoint, I'd rate loss-of-clock handling, ahead of memory masking.
    Right now, I think P2 requires an external watchdog to manage loss-of-clock ?
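jmg's modulus-based pointer constraining amounts to wrapping every computed address into an allowed window before use. A minimal sketch, with an arbitrary example window:

```python
# Constrain a hub pointer into [base, base+size) with a modulus, as jmg
# suggests a compiler option could do. Base and size are example values.

def constrain(addr, base, size):
    """Wrap addr into the window [base, base + size)."""
    return base + (addr - base) % size
```

An address already inside the window passes through unchanged; one past the end wraps back to the start, which is the small run-time cost jmg mentions.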
  • cgracey Posts: 14,152
    Seairth wrote: »
    cgracey wrote: »
    This is not real memory protection, but something to prevent a cog from taking the chip down when the chip is also being used as the development system. Two bits determine memory and hub op setting (00 = no restrictions) and sixteen bits inhibit pin writing for groups of 4 pins (1 = writes ignored). Normally, these bits are all 0, anyway, but can be made non-0 to set limitations. For the hour and few gates this would require, it would be worth it. I would have had this done yesterday, but I'm deep into the SPICE simulation for the PLL, right now.

    Okay, I misunderstood. But I still don't see how this is going to work effectively. While the debugger cog may COGINIT the "first" application cog, it wouldn't be starting any of the other cogs. All COGINITs trigger an initial debug hook in hub memory, but that's after COGINIT has been called. Given this, it makes more sense to leave COGINIT alone and add a separate instruction (or two) that would be called by the hooked debug code just after a COGINIT.

    Also, how would cogs behave when they attempt to perform a restricted operation? I'm guessing that such operations should be trapped by the debugger, then use SETBRK D to check for the restricted trap condition. If that works, providing fine-grained restrictions would allow a debugger to not only protect itself, but also detect situations where a cog is erroneously using any shared resource that it shouldn't be (i.e. potentially clobbering other application cogs).

    Or maybe there's another way. What if you had an instruction that simply sets a "trap shared resource instructions" flag. Then, every time a cog is about to execute one of those instructions (hub op, smart pin op, etc.), the debugger routine is called to check whether it's safe or not.

    Overall, though, none of this sounds really solid...


    I've been reviewing what would be needed to implement this, and it's pretty simple. It would only be useful for development, though. It doesn't detect any violations (though that could be useful); it just limits access.

    Allowance of these disruptive instructions could be gated, as a group:

    COGATN
    SETBRK
    COGINIT
    CLKSET
    COGSTOP - except self
    LOCKxxx

    Hub writes could be gated to allow only addresses $00000.. $000FF.

    DIR bits and WxPIN data could be gated via one bit per every four pins.

    I see two options via one bit:

    0) no restrictions (default)
    1) limited pins + no disruptives + only $00000..$000FF hub writes

    Option 1 would put a cog in a pretty safe box.
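The one-bit restriction mode Chip lists above can be modeled as a simple predicate. This is a toy sketch: instruction names follow his list (with LOCKNEW standing in for the LOCKxxx group), and the keyword arguments are invented for illustration.

```python
# Toy model of the single-bit restriction option: in restricted mode,
# disruptive instructions are refused (COGSTOP of self excepted) and hub
# writes outside $00000..$000FF are dropped.

DISRUPTIVE = {"COGATN", "SETBRK", "COGINIT", "CLKSET", "COGSTOP", "LOCKNEW"}

def allowed(restricted, instr, *, target_cog=None, self_cog=None,
            hub_addr=None):
    """Return True if the cog may perform `instr` under its restriction bit."""
    if not restricted:
        return True                        # option 0: no restrictions
    if instr == "COGSTOP" and target_cog == self_cog:
        return True                        # a cog may always stop itself
    if instr in DISRUPTIVE:
        return False
    if instr == "WRLONG":                  # stands in for all hub writes
        return 0x00000 <= hub_addr <= 0x000FF
    return True
```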
  • jmg Posts: 15,173
    cgracey wrote: »
    Hub writes could be gated to allow only addresses $00000.. $000FF.

    Is there room for more than 1 bit for Hub-Write-Map ?
    I can see wide variations in data-space requirements, so perhaps {256 / 1024 / 4096 / All} for 2 choice bits, or more for 3 etc.
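jmg's two-choice-bit variant would map mode bits to window sizes; the size table here is taken from his example, and the encoding is an assumption.

```python
# jmg's suggestion, sketched: two mode bits select the writable hub
# window size. 0b11 (None) means the whole hub, i.e. no restriction.

WINDOW = {0b00: 256, 0b01: 1024, 0b10: 4096, 0b11: None}

def hub_write_limit(bits):
    """Return the writable window size in bytes, or None for all of hub."""
    return WINDOW[bits & 0b11]
```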

  • pjv Posts: 1,903
    While development on a P2 may be nice, it certainly is not NEEDED.

    I cannot think of a commercial instance where this would be a necessity, as almost all folks will be developing with the more conventional approach, using an external computer.

    This thing is complicated enough already, and we should not be adding to that complexity. It will be hard enough already for newcomers, or even old hands, to grasp and embrace all the detail we now need to pore through to get things right. That's what I hated about other micros... the P1 was simple; the P2, while promising to be very capable, is starting to look not so pretty.

    Simplicity and symmetry rule!

    Cheers,

    Peter (pjv)
  • The Micro Python people see interactive on chip development as being very useful, and see it in the same terms we do. (Those of us who do, ahem. :) )

    http://traffic.libsyn.com/theamphour/TheAmpHour-323-AnInterviewWithTonyDiCola.mp3

    If this is inexpensive, it's worth it. The way I see it, all the spiffy features we've added will require understanding. Lots of ways to get that are needed and this is just one of the ways.

    If the scope stays there, it's more benefit than harm.

    I would allow more HUB for use by the streamer in the development COG, but what Chip proposed is fine.
  • cgracey Posts: 14,152
    edited 2016-11-25 22:31
    The cog's reads are never limited, just the writes. Whatever space we are going to allow, say, beginning at $00000, is going to be a formative matter, dictating how much every O.S. must allow for scratch space. It's not going to be practical to allow arbitrary video buffers, but some small space through which a cog can get maybe 4..16 longs to hub would be sufficient and practical to implement. Maybe the range could even be fixed by cog.
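If the scratch range were fixed by cog number, as Chip muses at the end, the mapping could be trivial. The 16-longs-per-cog figure below is the upper end of his 4..16 estimate; the layout is an illustrative assumption.

```python
# Hypothetical per-cog scratch layout: each of 16 cogs gets a fixed
# 16-long (64-byte) writable window at the bottom of hub RAM.

LONGS_PER_COG = 16

def scratch_range(cog):
    """Return the inclusive byte range of hub scratch writable by cog 0..15."""
    base = cog * LONGS_PER_COG * 4        # 4 bytes per long
    return base, base + LONGS_PER_COG * 4 - 1
```

Fixing the range by cog would also answer Chip's point about it being a formative matter: every O.S. would reserve the same 1 KB block at $00000.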
  • Rayman Posts: 14,643
    Another approach might be to use the shared LUT.

    The cog being developed could have no hub or pin write access.

    The shared LUT cog could manage requests for HUB and pin writes.
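Rayman's shared-LUT mediation scheme can be sketched as a request/response pair. A dict stands in for the shared LUT and a bytearray for hub RAM; the protocol is invented for illustration.

```python
# Sketch of LUT-mediated access: the sandboxed cog has no hub or pin
# write access and posts requests into the shared LUT; the paired
# privileged cog validates and services them.

lut = {}                      # stands in for the shared LUT RAM
hub = bytearray(256)          # stands in for (a slice of) hub RAM

def sandboxed_request(addr, value):
    """Sandboxed cog: post a hub-write request through the shared LUT."""
    lut["req"] = ("WRLONG", addr, value)

def privileged_service(allowed_range):
    """Privileged cog: perform the request if it falls in the allowed range."""
    op, addr, value = lut.pop("req")
    lo, hi = allowed_range
    if op == "WRLONG" and lo <= addr <= hi:
        hub[addr] = value
        return True
    return False              # request refused; sandboxed cog is contained
```

The appeal of this variant is that the policy lives in software in the privileged cog, so no extra gating hardware is needed beyond the write locks already proposed.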
  • evanh Posts: 15,915
    I'm with pjv. I vote a straight no. Stop thinking about it, It'll create problems for sure.
  • cgracey Posts: 14,152
    Ok. I'll forget about it. Forgotten.
  • jmg Posts: 15,173
    pjv wrote: »
    While development on a P2 may be nice, it certainly is not NEEDED.

    I cannot think of a commercial instance where this would be a necessity, as almost all folks will be developing with the more conventional approach, using an external computer.
    I'm not sure that was what was being proposed or discussed?

    I'd agree that most development still needs an external computer, but the details discussed here affect running a Debugger on the P2, vs using a second P2 as Debug Host.
    There is room for both approaches, they need not be mutually exclusive.

    Looking at various ways to prevent one COG from crashing another is certainly a worthwhile exercise.
    Other MCUs do put gateways in their critical functions, so this is neither new nor that complicated.

  • So much of what real-world cog usage does would need write access to hub memory just to function.
    I think the subset that would benefit is so limited that it's not worth spending the time or taking the risk.

    Pin access restriction is an interesting idea, but I don't think it's worth the time or risk to venture down that road.

    Save these ideas for the next chip.
  • cgracey Posts: 14,152
    Roy Eltham wrote: »
    So much of what real-world cog usage does would need write access to hub memory just to function.
    I think the subset that would benefit is so limited that it's not worth spending the time or taking the risk.

    Pin access restriction is an interesting idea, but I don't think it's worth the time or risk to venture down that road.

    Save these ideas for the next chip.

    The risk would be very low, but to do this properly, it needs to be a forethought, not an afterthought.
  • cgracey Posts: 14,152
    edited 2016-11-26 08:20
    So, I've got only two things to complete:

    1) Finish testing new PLL - two days
    2) Change J/K reporting in USB smart pin per Garryl - 15 seconds
  • cgracey Posts: 14,152
    Oh, and these dang fuses.
  • jmg Posts: 15,173
    cgracey wrote: »
    So, I've got only two things to complete:

    1) Finish testing new PLL - two days
    2) Change J/K reporting in USB smart pin per Garryl - 15 seconds

    Can I suggest ..
    1b) Spice Test Crystal Amp, for AC coupled clipped sine. (0.8v p-p)
    The possible design change outcome of this, is to add a CL=0pF config case for Xtal Buffer, if fMAX is found to be too low.

    2b) Add VCO Post Divider for SysCLK, this can be in Verilog, /1..255 ?
  • cgracey Posts: 14,152
    edited 2016-11-27 06:49
    jmg wrote: »
    cgracey wrote: »
    So, I've got only two things to complete:

    1) Finish testing new PLL - two days
    2) Change J/K reporting in USB smart pin per Garryl - 15 seconds

    Can I suggest ..
    1b) Spice Test Crystal Amp, for AC coupled clipped sine. (0.8v p-p)
    The possible design change outcome of this, is to add a CL=0pF config case for Xtal Buffer, if fMAX is found to be too low.

    2b) Add VCO Post Divider for SysCLK, this can be in Verilog, /1..255 ?

    If the maximum VCO frequency is 320 MHz, is someone really going to want to divide that by 256? I could see only 4 bits being useful.

    The 0.8V peak-peak XI input could even be tested on the test chip. I could run my function generator at low-voltage and high-frequency and observe its output to come up with a sensitivity curve.
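The arithmetic behind Chip's point about divider width: with a 320 MHz VCO ceiling, a /256 post-divider bottoms out at 1.25 MHz, while 4 divider bits (/1../16) still reach down to 20 MHz.

```python
# SysCLK frequencies from the VCO post-divider jmg proposes, at the
# 320 MHz maximum VCO frequency Chip cites.

VCO_MAX_HZ = 320_000_000

def sysclk(div):
    """System clock in Hz for a given post-divider value."""
    return VCO_MAX_HZ // div
```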