
Restricted mode for cogs

u12 Posts: 4
edited 2014-05-18 19:13 in Propeller 2
The current chip in the works looks promising. What I haven't seen, however, is a way to restrict cogs such that a restricted cog would not be allowed to arbitrarily manipulate the chip's resources. The presence of 16 separate cogs would appear to be ideal for running code in isolation from other code, yet from what I've read about this chip so far, isolation cannot be guaranteed with the current design, since communication policies between cogs cannot be defined and access to resources cannot be restricted. Unless my assumptions about this aspect of the design are wrong, I would like to suggest introducing the possibility of starting cogs in a restricted mode, where

- access to pins is restricted,
- access to memory is restricted,
- use of certain instructions is restricted.

Ideally, access to pins should be selectable on a per-pin basis, which would require at least 64 bits of state per cog. If that is seen as too much, even coarser-grained control would still be better than none at all. I don't know how the cogs do pin addressing, but since there are 64 pins, I would assume that 6 bits of an instruction must be used to refer to a certain pin. So a less fine-grained way to specify access restrictions to pins would be to have 6 bits of state per cog to mask the pin addressing. A similar approach of address masking could be used to restrict the hub RAM address space that would be accessible to a cog in restricted mode. If even more control bits could be spent, it might even be possible to specify the kind of access rights, like read-only or write-only, instead of merely access or no access.
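
To make the idea concrete, here is a rough C model of such per-cog restriction state. Everything in it is illustrative: the field names, the 64-bit pin mask and the AND-style hub address mask are assumptions about how such a scheme could look, not a description of any actual P2 mechanism.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-cog restriction state (illustrative names and widths). */
typedef struct {
    bool     restricted;     /* once set, the cog cannot clear it itself      */
    uint64_t pin_allowed;    /* bit n = 1: a restricted cog may touch pin n   */
    uint32_t hub_addr_mask;  /* AND'ed onto hub addresses of a restricted cog */
} cog_limits_t;

/* Fine-grained variant: one bit of state per pin, 64 bits per cog. */
static bool pin_access_ok(const cog_limits_t *c, unsigned pin)
{
    if (!c->restricted)
        return true;                        /* unrestricted cog: all pins */
    return pin < 64 && ((c->pin_allowed >> pin) & 1u);
}

/* Address masking: the same trick limits which part of hub RAM a
 * restricted cog can reach by forcing some address bits to zero. */
static uint32_t hub_effective_addr(const cog_limits_t *c, uint32_t addr)
{
    return c->restricted ? (addr & c->hub_addr_mask) : addr;
}
```

Read-only or write-only rights could be modelled the same way, with one such mask per access direction.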

The set of instructions whose execution is blocked in restricted mode would have to include at least those that would allow a cog to lift restrictions once they have been set. The behaviour when an attempt is made to execute a restricted instruction should depend on what is easiest to implement. For example, accessing a masked-out memory region could either fail silently, by being rewritten to a NOP or by returning some fixed value, or fail hard, by raising some error signal or by halting execution and shutting down the cog.
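
As a purely illustrative sketch of those two options, a simulator of such a cog (not the silicon itself) might model the choice between "fail silently" and "fail hard" roughly like this; the names and the fixed return value of zero are assumptions, not proposals for specific behaviour.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Two possible reactions of a restricted cog to a forbidden operation. */
typedef enum {
    FAIL_SILENTLY,   /* treat the operation as a NOP / return a fixed value */
    FAIL_HARD        /* raise an error and shut the cog down                */
} fail_policy_t;

/* Model of a hub read that hits a masked-out region. */
static uint32_t checked_hub_read(fail_policy_t policy, bool allowed,
                                 uint32_t addr, uint32_t real_value)
{
    if (allowed)
        return real_value;

    if (policy == FAIL_SILENTLY)
        return 0;                       /* "some fixed value"                 */

    fprintf(stderr, "cog halted: forbidden hub access at %#x\n", (unsigned)addr);
    exit(EXIT_FAILURE);                 /* stand-in for shutting the cog down */
}
```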

I don't want to be picky about the granularity of these restrictions, but I would like to point out that at least some form of restricted mode for cogs would be necessary. If nobody else sees a use for this, I would already be happy if restricted cogs could only access half of the hub RAM, could be denied access to all pins, and could be denied starting new cogs and, of course, changing their own mode of operation. That said, I certainly believe other people would also find a more fine-grained mechanism for restricting a cog's capabilities and privileges useful, even if only to exclude the possibility that components they have implemented entirely themselves interfere with each other.

The concept as such could also be extended to other sorts of run modes. If, for instance, it is known that certain instructions won't be used, a cog could switch to a mode where the corresponding functional units are switched off to save power, provided those units are logically independent modules.

Comments

  • SRLM Posts: 5,045
    edited 2014-05-18 13:25
    Is it really likely that you'll be running untrusted code on the P2? I'd imagine that for all designs you'll be carefully reviewing the entire code base that goes onto the chip, so there won't be un-vetted third party code.
  • u12 Posts: 4
    edited 2014-05-18 13:50
    As I mentioned, restricting cogs would also facilitate the verification of your own code, since entire classes of access violations could be ruled out in your proofs. Whether running other untrusted code makes sense would certainly depend on the application. Since the chip seems to be powerful enough to build a (simple) general-purpose computer from it, it is worth thinking about, especially since, as I see it, it could easily be implemented with virtually no silicon overhead: what is desired is not complicated new functionality but a way to restrict functionality that is already there. Having to run untrusted code in virtual machines would add significant overhead to a system and cost memory space and processor cycles. An example system where you might want users to write and share code for the processor could be an ebook reader run by a P2. As I said, I would already be happy with a very crude, simple and primitive implementation of restricted mode. More sophisticated schemes could definitely wait for P3. The problem right now is that there is a nice chip with conceptually isolated cores which, on closer look, provide no means to guarantee their separation and freedom from interference at all, besides carefully writing and running bug-free code.
  • Heater. Posts: 21,230
    edited 2014-05-18 13:51
    u12,

    Welcome to the forum.

    Now, why would you be wanting to introduce such features into a micro-controller? We are not running a multi-user system here. We are not fetching random unknown code from the internet at run time.
  • u12 Posts: 4
    edited 2014-05-18 13:58
    Heater. wrote: »
    Now, why would you be wanting to introduce such features into a micro-controller? We are not running a multi-user system here. We are not fetching random unknown code from the internet at run time.
    Why are you not? What stops you? And as I said, the mechanism could also help you in keeping your own code correct by failing early if it was not correct.
  • potatohead Posts: 10,261
    edited 2014-05-18 14:05
    I personally see those things as out of scope for this chip.

    If you want to do testing of that kind, the LMM style of code can provide a lot. The kernel would fetch the instructions and act on them as permitted. Yes, it's slower, but likely effective for code testing (a sketch of such a loop is given at the end of this post).

    Secondly, a trusted boot loader could very easily prevent untrusted code from running. The device will ship with encryption facilities sufficient for this. Develop on an open system, encrypt, send code to trusted system, done.
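
    A bare-bones C sketch of the kind of fetch-and-check loop described above (the opcode names, the 8-bit opcode field and the permission mask are invented for illustration and have nothing to do with real LMM or PASM encodings):

    ```c
    #include <stddef.h>
    #include <stdint.h>

    /* Toy instruction set: top 8 bits opcode, low 24 bits operand. */
    enum { OP_NOP = 0, OP_LOAD = 1, OP_STORE = 2, OP_SETPIN = 3 };

    typedef struct {
        uint32_t permitted_ops;   /* bit n set: opcode n may be executed */
    } sandbox_t;

    /* Kernel-style loop: fetch each word from the hub image and act on it
     * only if the sandbox permits; otherwise skip it. Slower than native
     * execution, but useful for testing unvetted code. */
    static void run_sandboxed(const sandbox_t *sb, const uint32_t *image, size_t n)
    {
        for (size_t pc = 0; pc < n; pc++) {
            uint32_t op = image[pc] >> 24;

            if (op > 31 || !((sb->permitted_ops >> op) & 1u))
                continue;                          /* not permitted: treat as NOP */

            switch (op) {
            case OP_NOP:                                       break;
            case OP_LOAD:   /* ... perform a checked load   */ break;
            case OP_STORE:  /* ... perform a checked store  */ break;
            case OP_SETPIN: /* ... drive a permitted pin    */ break;
            }
        }
    }
    ```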
  • u12 Posts: 4
    edited 2014-05-18 14:36
    potatohead wrote: »
    I personally see those things as out of scope for this chip. If you want to do testing of that kind, the LMM style of code can provide a lot. The kernel would fetch the instructions, and act on them as permitted. Yes, it's slower, but likely effective for code testing. Secondly, a trusted boot loader could very easily prevent untrusted code from running. The device will ship with encryption facilities sufficient for this. Develop on an open system, encrypt, send code to trusted system, done.

    The point is that running untrusted code might be desirable. The problem is that it cannot safely be done at present. Why would an implementation of the feature be out of scope? In its simplest form, as far as I can see, it would only require a single mode bit per cog. That bit could be OR'd with the most significant bit of each hub memory access, confining the cog to one half of the hub memory; it could likewise be OR'd with some bit of the pin access address, limiting the accessible pin range (or, alternatively, indicate that no pins can be accessed at all); and it would indicate that the cog cannot change its mode any more once it is in restricted mode. Thus, only one new command, "set-restricted-mode", would be required (a rough sketch of the scheme is given at the end of this post).

    Are 16 storage bits (one per cog) and a few OR gates really too much overhead? To me, it would provide so much value that I would rather see several other features under discussion in this forum left out. As I said, the most basic implementation imaginable would already be enough for this chip; more flexibility could be considered for future Parallax processors. I admit I'm not familiar with the effort necessary to add the feature to the chip, but reading here how seemingly much more complicated things are implemented in a day, I wonder whether this could not be added within an hour or even less.
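
    A rough C model of the single-bit scheme sketched above (the 512 KB hub size, the bit positions and all names are illustrative assumptions, not the P2's actual memory map or instruction set):

    ```c
    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: assume a 512 KB hub, so bit 18 is the top address bit. */
    #define HUB_MSB_BIT   18u
    #define HUB_ADDR_MASK 0x7FFFFu

    /* One mode bit per cog, sixteen bits of state in total. */
    static uint16_t cog_restricted_bits;

    static bool cog_is_restricted(unsigned cog)
    {
        return (cog_restricted_bits >> cog) & 1u;
    }

    /* The one new command, "set-restricted-mode": the bit can be set but
     * never cleared by the cog itself. */
    static void set_restricted_mode(unsigned cog)
    {
        cog_restricted_bits |= (uint16_t)(1u << cog);
    }

    /* Every hub access OR's the cog's mode bit into the address MSB, so a
     * restricted cog is confined to one half of hub RAM. The same bit could
     * gate pin access and the ability to start other cogs. */
    static uint32_t hub_addr_with_mode_bit(unsigned cog, uint32_t addr)
    {
        uint32_t mode_bit = cog_is_restricted(cog) ? 1u : 0u;
        return (addr & HUB_ADDR_MASK) | (mode_bit << HUB_MSB_BIT);
    }
    ```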
  • Heater. Posts: 21,230
    edited 2014-05-18 14:39
    u12,


    If one wants to run a multi-user operating system on such a small machine there are already ways to do it that are thousands of times better. You cannot beat the size and price of a Raspberry Pi for example. There are thousands of cheap embedded ARM processors out there that will do that.


    From a practical point of view such protection features add transistors, power consumption and design complexity. This chip has already been 8 years in the design phase! It took Intel years to get the protected modes of their x86 chips right. I used to have a copy of the errata for the Intel 286, under NDA, that was a two-inch-thick document detailing all the ways their protected modes were broken!


    I'm pretty certain most users would not ever use such features.


    You do have a valid point about catching coding errors. It's not foolproof, though. Such "out of bounds" exceptions may not happen except under some weird combination of inputs you have never tested for, at which point your code fails, perhaps in a product out in the field, which then stops working for no apparent reason, much as bugs crash things anyway.


    I have always wondered why processors don't throw such exceptions when you do such things as add two integers that overflow your number range. No, they just silently give you the wrong result and continue. Great. How that kind of behaviour has been acceptable since the dawn of time is beyond me.
  • Heater. Posts: 21,230
    edited 2014-05-18 14:47
    u12,
    I wonder whether this could not be added within an hour or even less.
    Have you ever written any software?

    Even the simplest things can take far longer than that to get designed, written, documented, tested, debugged, fixed. Remember the average rate of production of code in the software industry is 10 lines per day!

    Designing hardware in a hardware description language is, as far as I know, even harder. I must give it a go some day.

    Investing time into something few people want or need would not be a good investment.

    Speaking of protected modes. It was about ten years after the introduction of such features with the 286 chip that people actually started to use them with Windows 95!
  • potatohead Posts: 10,261
    edited 2014-05-18 14:47
    Welcome. But I'm with Heater and just don't agree.

    Frankly, using such modes is a crutch. It might work, then fail as Heater says. The best investment to make is very solid unit testing and hardware design to handle faults at this scale.
  • evanh Posts: 16,032
    edited 2014-05-18 15:19
    Maybe after another ten years, lots of FPGA-based redesign experiments and a process shrink down to 35 nm, there might be a case for saying there is all this spare space to throw away and excess MIPS to burn ... I know, let's support licensed bloatware.

    Actually, a really good reason not to support code protection is compatibility. There is immediately some expectation of continuity of architecture with the soft-padded, API-wrapped, OS-managed, IT setup that comes from fully protected code. The redesign of the Prop2 is not constrained by compatibility with the Prop1. That's a good thing.
  • jmg Posts: 15,175
    edited 2014-05-18 19:13
    u12 wrote: »
    The point is that running untrusted code might be desirable. The problem is that it cannot safely be done at present.

    This seems a fundamental contradiction.
    How can running untrusted code ever be wholly safe ??
    Even half the pins, or half the memory, are still important and likely vital resources, and you've not even considered the global configuration of things like the PLL.

    The P2 will have fuses and encryption, and that should be enough (?) to design moderately hardened systems.
    COGs cannot directly access other COGs' memory, so there is already that level of corruption protection.

    It may be that Chip plans a fuse to determine which COGs can access boot memory.
    IIRC COG0 is the first one to the plate, and a fuse could mean only COG0 can write/read serial FLASH.


    I could see a script engine that supported sets of masking params that force limits on running scripts, but that is a software-level problem, not a silicon-level one.