Propeller II update - BLOG - Page 27 — Parallax Forums

Propeller II update - BLOG


Comments

  • hippy Posts: 1,981
    edited 2012-02-12 16:07
    Cluso99 wrote: »
    Anyway, back to reality. Hippy decoded the spin instructions and interpreter quite well using a form of statistical analysis. However, most of this work was done after Chip threw down the gauntlet to work out the ROM encryption.

    To give credit where due, it was Phil Pilgrim (PhiPi) who did some amazing stuff with statistical analysis, well out of my league. I took a more empirical approach: I noted a very common pattern in most PASM code that was likely to be repeated at the start of the interpreter, and got lucky.

    http://forums.parallaxinc.com/forums/default.aspx?f=25&m=251014

    My effort was certainly helped by what turned out to be just basic per-word bit remapping rather than anything more complicated or full encryption, and I almost certainly wouldn't have tried if Chip hadn't strongly hinted it was simpler than any of us had imagined.

    I haven't followed the recent discussion of how the protection of Prop II will/may work, but one thing to bear in mind is that this is for user code, not 'unknown code'; if the owner can reprogram a protected Prop II, they have access to any source binary (from all bits clear to all bits set), the same data encrypted in EEPROM, and (possibly) the fuse settings. That's a lot of information to have when determining what the mapping is. The hard part in most deciphering seems to be not knowing what the deciphered output should look like; if you control the source and have the encrypted version, it's an easier job.

    Also, whatever is chosen likely has to be zero overhead and has to cater for persistent data being read from or written to EEPROM at any time without screwing everything up (or prohibit that), so it is likely 'stateless' and possibly constrained to bit mapping and address mapping. That is, static encryption, but with the actual per-word mechanism dictated by fuse settings and possibly address, even data. We can all propose uncrackable encryption schemes, but there's the hard reality of it having to be practical and implementable.

    If I were trying to determine what's what, I'd start with a source binary with one bit set, shift that through memory, and see what happens.

    If all this has already been covered / discounted / is incorrect then apologies - I'll have to find some time to read up on all that's been discussed!
  • rod1963 Posts: 752
    edited 2012-02-12 16:14
    Looking at eBay, electron microscopes aren't that expensive - maybe as much as a luxury car or SUV.

    And for an outfit doing high-end IP theft, it's not much of a cost, since they can recoup it quite fast. Of course, for the P2 to attract that level of attention, Parallax is going to need a big design win that gets lots of attention.
  • Cluso99 Posts: 18,069
    edited 2012-02-13 16:42
    Further thoughts...
    1. The loader reads in blocks of 512 bytes
    2. The last 4 bytes (32 bits) contain a checksum of the block and are validated before the block is decrypted
      1. If the checksum fails, then the b0 of the checksum is inverted and the checksum is revalidated
        1. If the checksum still fails, then the load is terminated because a failure has occurred.
        2. If this checksum passes, then this block is the last block, and after decryption the penultimate long (bytes 504-507) will be used as the start address.
      2. Now, in both cases (normal or start record)...
        1. The 508 byte record (excluding trailing checksum) is decrypted
        2. The first 4 bytes (32bits) contain the hub address for the data to be loaded into
          1. The upper unused address bits are "and"ed off, as are the lowest 2 bits. This means they are don't-care bits and can be used to hide the real code. The lower 2 bits are not used because the load will always be long aligned.
          2. If the address is above the hub ram limit, a value is added to cause the address to wrap to a valid address (eg if 192KB hub, then addresses >192KB will have 128KB added i.e. the top bit of the address is inverted)
          3. The 504 data bytes (bytes 4-507) will now be copied to hub ram at the address specified. If this is a START record, then this includes the start address (bytes 504-507).
      3. When the "start" record is encountered...
        1. The length of the load is checked. At least 64 blocks must have been read (i.e. 32KB is a minimum load to prevent simple trial and error attacks).
        2. The penultimate 32 bits (bytes 504-507) are used as an address. The same routine used for the load address (bits ANDed off, address wrapped) calculates the address. This address is then used to load a quad-long from hub into the cog (cog 0) at cog address 0.
        3. The cog will then execute a jmp to cog address 0 where at least the first of these 4 instructions will be executed.
        4. The quad-long actually provides sufficient code to commence an LMM instruction loop, to load further code into the cog, or whatever the user decides.
          1. In fact, the user can now implement his/her own decryption method on the hub code if desired. This therefore means that the hub code need not necessarily adhere to any statistical analysis of code that could be used in an attempt to break the code.
      4. The download blocks may be out-of-order and may further use overlapping blocks to permit additional fake instructions to prevent statistical analysis of the code.
      5. The START record must be the last record downloaded. Note however, it might only point to 4 longs in hub. It is not a requirement that any cog code continues from these 4 longs in hub, further hiding the real code.
    All of the above is quite simple in PASM and in no way affects the actual encryption algorithm used. It provides a lot of protection to the user, in addition to the encryption algorithm itself (a rough model of the block handling is sketched below).
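
    A rough Python model of the block handling in steps 1-2 above, assuming a 192KB hub and placeholder checksum/decryption routines (the real ones would live in the silicon and aren't specified here):

        HUB_SIZE = 192 * 1024

        def checksum(data):
            # placeholder: a simple 32-bit sum; the actual checksum isn't specified above
            return (sum(data) & 0xFFFFFFFF).to_bytes(4, "little")

        def decrypt(data):
            # placeholder for the fuse-keyed decryption the loader would apply
            return bytes(data)

        def load_block(block, hub):
            assert len(block) == 512
            payload, csum = block[:508], block[508:]
            is_start = False
            if checksum(payload) != csum:
                # invert b0 of the checksum and revalidate; a match marks the start record
                if checksum(payload) != bytes([csum[0] ^ 1]) + csum[1:]:
                    raise RuntimeError("load terminated: checksum failure")
                is_start = True
            plain = decrypt(payload)                     # 508 decrypted bytes
            addr = int.from_bytes(plain[0:4], "little")
            addr &= 0x3FFFC                              # keep 18 address bits, drop the low 2 (long aligned)
            if addr >= HUB_SIZE:
                addr ^= 0x20000                          # invert the top bit so the address wraps into hub ram
            hub[addr:addr + 504] = plain[4:508]          # copy the 504 data bytes to hub ram
            start_addr = int.from_bytes(plain[504:508], "little")
            return is_start, start_addr                  # start_addr is only meaningful for the start record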

    BTW just because someone breaks a particular piece of code doesn't make it any easier to break the next piece of code, provided the break has not discovered a flaw in the encryption method.
  • pedward Posts: 1,642
    edited 2012-02-13 17:05
    Here is a solution I proposed:

    ROM contains a SHA-256 hash algorithm.

    The first 504 longs of the EEPROM are a SHA-256 hash of the bootloader+secret key, and the bootloader in clear text.

    The secret key would be 128 bits in length.

    The ROM PASM loads 8-long chunks of the bootloader until all 496 longs are hashed, then it loads the 128-bit secret key into the block buffer, appends $80, then pads the very end of the block buffer with the bit length of the hashed data: (496+4)*32 = 16000.

    One final round of the SHA-256 hash is run against this last block, then the resulting 256 bit hash is compared to the first 8 longs of the EEPROM.

    If the hash matches, the 496 longs are loaded into a COG and given control of the chip. The bootloader is signed with the secret key in the chip, so the code is now trusted.

    The bootloader can implement whatever decryption algorithm it wants to; I suggest AES-128 because decryption can fit entirely into 1 COG without any LMM or tricky stuff. The bootloader has access to the 128-bit secret key and uses it to decrypt the contents of the EEPROM into RAM. Just before the bootloader hands control over to the decrypted code, it throws away the key and locks access to it so that USER-mode programs can't read the secret key.

    Since the payload is encrypted with the same secret key that signs the bootloader, you can avoid signing the payload, since a tampered payload would decrypt to garbage anyway.
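
    A minimal Python model of the verification step above, assuming a hypothetical EEPROM image laid out as 8 longs (32 bytes) of SHA-256 digest followed by 496 longs (1984 bytes) of clear-text bootloader, and a 16-byte secret key:

        import hashlib

        def verify_bootloader(eeprom_image, secret_key):
            stored_hash = eeprom_image[0:32]            # first 8 longs: the signature
            bootloader  = eeprom_image[32:32 + 1984]    # next 496 longs: clear-text loader
            # hashlib applies the $80 byte and bit-length padding automatically,
            # covering the (496 + 4) * 32 = 16000 hashed bits described above
            computed = hashlib.sha256(bootloader + secret_key).digest()
            return computed == stored_hash              # only a correctly signed loader gets control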

    To field update you simply need to write to the USER area of the EEPROM, leaving the BOOT area alone, which is the first 504 longs.

    This method doesn't have any holes, is very simple to implement, and it's full disclosure. The only way to compromise it would be to guess the 128 bit key or to get someone to sign a backdoored bootloader.

    Backdooring can be avoided by always compiling the bootloader from known-good source and only updating the payload in the field, so if a server were compromised, the bootloader would still be safe.

    The other really nice thing about this is that you can use GPG out of the box to both sign and encrypt the data, no middleware needs to be written to support these algorithms.

    --Perry

    EDIT: fixed some errors, 16/8
  • rod1963 Posts: 752
    edited 2012-02-13 18:51
    If Parallax is dead serious about encryption and competing in the secure processor market they need to hire a pro for this. It wouldn't look good if they got taken to the woodshed over a badly thought out solution.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-02-13 19:02
    rod1963 wrote:
    If Parallax is dead serious about encryption and competing in the secure processor market they need to hire a pro for this. It wouldn't look good if they got taken to the woodshed over a badly thought out solution.
    I can't agree more. This is no time for "Amateur Night at the Follies." You just can't guess at this stuff. It requires a thorough grounding in discrete mathematics, which only a handful of people possess, to come up with new -- and secure -- encryption methods. 'Better to stick with what's proven than to try inventing something new. Just because an algorithm seems to produce something all jumbled up doesn't mean that it is. But I think Chip learned his lesson with the ease of deciphering his bit-permuted interpreter. At least I hope so.

    -Phil
  • Invent-O-Doc Posts: 768
    edited 2012-02-13 19:41
    Isn't the point of the encryption to make it a hassle to try and lift the code, more so than to make it impossible?

    Some things are only worth so much effort....

    The encryption approach should be balanced against ease of implementation.
  • Circuitsoft Posts: 1,166
    edited 2012-02-13 19:49
    One other benefit to discussing this here is that Chip gets a better idea of points to challenge someone on to make sure the "professional" he hires is actually a professional.
  • pedward Posts: 1,642
    edited 2012-02-13 19:50
    I think some people might be confusing an encryption engine with code protection.

    If Parallax were producing a chip that was claimed to be a secure encryption engine, then attention to timing attacks, power attacks, reset attacks, EMI attacks, radiation attacks, etc. would be prudent and would warrant the consultation of some very well-paid experts.

    Parallax is trying to do code protection, which involves making it difficult to recover code. None of the code protection systems are truly foolproof - PIC, AVR, FPGAs, etc. The difference is that their code protection systems are closed and not open to public scrutiny. The P2 isn't even finalized and its code protection scheme has already seen more eyeballs than any of the others!

    The secret to all of this is an open design, full disclosure, without any secrets or tricks.

    With the scheme I outlined, Chip needs to only worry about keeping the key secret during runtime, not the methods or algorithms.

    As a count of hands, who owns "Applied Cryptography"?
  • 4x5n Posts: 745
    edited 2012-02-13 19:54
    Invent-O-Doc wrote: »
    Isn't the point of the encryption to make it a hassle to try and lift the code, more so than to make it impossible?

    Some things are only worth so much effort....

    Encryption approach should be balanced with ease of implementation

    Over the years my goal was to make stealing my code take more work and more time than developing it!

    After all, if it takes longer and more work to steal something I wrote than to write it in house, then not many companies will be interested in stealing it!
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-02-13 20:04
    pedward wrote:
    I think some people might be confusing an encryption engine with code protection.
    No, we're not. Encryption is exactly what we're talking about, because the more limited "code protection" applies to code extant in the MCU itself, where it's protected from access, not decipherment. With the Prop, it's all hanging out there in the breeze in EEPROM, where anyone can examine it, so it needs to be encrypted, not merely protected.

    I don't own a copy of Applied Cryptography. I have Steven Levy's Crypto, Simson Garfinkel's PGP: Pretty Good Privacy, Simon Singh's The Code Book, and Peter Wayner's Disappearing Cryptography. I've read them all, and I still don't have a working knowledge of any of them. Moreover, anyone in this forum who claims to be an expert in cryptography is deluding himself.

    -Phil
  • Heater. Posts: 21,230
    edited 2012-02-13 21:09
    Phil is right. In the old days, and probably now, your code lived in on-chip PROM or flash, and protection was just a case of blowing a fuse that prevented reading of that store, and probably prevented writing again too. No encryption required. The protection was just the difficulty of opening the chip and reading the PROM.
    With the Prop it's just a case of reading the external EEPROM into your PC and applying its power, and that of any other machines you can muster, to cracking the code.
    Plus in many such cases once one instance has been cracked the exploit is circulated world wide and every other instance becomes trivial overnight.
    So the encryption is crucial.
  • pedward Posts: 1,642
    edited 2012-02-13 21:27
    Attached is an AES-128 encrypted binary of a compiled PASM file. I simply clicked "save as binary" in BST and encrypted it with "openssl enc -aes-128-cbc -in sha-256.binary -out sha-256.enc"

    The code is 193 longs of a SHA-256 hashing implementation.

    I'm posting the code publicly, and I used a very simple key.
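
    The matching decrypt step (assuming the same passphrase and OpenSSL's default password-based key derivation) would be "openssl enc -d -aes-128-cbc -in sha-256.enc -out sha-256.decrypted", which should reproduce the original binary.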

    I'm trying to point out how ridiculous the argument is: "you need an expert, the lay person can't design something secure".

    If you use proven algorithms and implementations that are publicly accepted, how are you going to steal it? The fact is that you would need to recover the key from the chip. The vulnerability is in key recovery from the chip, not any of the ideas I've put forth.

    You can dissect an EEPROM all day and not get anywhere; the code is out there for everyone, and it's secure.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2012-02-13 21:51
    pedward wrote:
    I'm trying to point out how ridiculous the argument is: "you need an expert, the lay person can't design something secure".
    Yet your example just reinforces my point: you're merely implementing a technique that was designed by experts in the field, rather than designing one of your own. It's the latter that I claim would be foolish.

    -Phil
  • Circuitsoft Posts: 1,166
    edited 2012-02-13 22:44
    Using AES is one thing. Creating a method to use it that can't be attacked easily is another. That's why it's worth hiring a consultant. Security needs to be considered from a whole-system standpoint. I don't pretend to be an expert, but this is a "the more you know, the more you know you don't know" type situation.
  • Cluso99 Posts: 18,069
    edited 2012-02-14 00:39
    As far as I understand, only 64 bits are available.

    pedward: IMHO a short 504-byte field would be much simpler to decipher by brute force than a large block of code.

    I agree that a professional and proven algorithm is preferred.
  • Batang Posts: 234
    edited 2012-02-14 02:29
    The code is 193 longs of a SHA-256 hashing implementation.

    Just like to point out that a hash function is one-way, i.e. you can't just un-hash it!

    Cheers.
  • Heater. Posts: 21,230
    edited 2012-02-14 02:32
    pedward
    If you use proven algorithms and implementations that are publicly accepted, how
    are you going to steal it? The fact is that you would need to recover the key
    from the chip. The vulnerability is in key recovery from the chip, not any of
    the ideas I've put forth.

    Not so fast. The history of this field is littered with examples of
    "unbreakable" systems that used pretty good algorithms but were nonetheless
    broken due to oversights in implementation or use. Such confidence can lead to
    carelessness. Paranoia is the best approach.

    The famous wartime Enigma system had a pretty good algorithm for its day, but
    look what happened to that. From Wikipedia:
    "Although Enigma had some cryptographic weaknesses, in practice it was only in
    combination with procedural flaws, operator mistakes, captured key tables and
    hardware, that Allied cryptanalysts were able to be so successful"

    In recent times the WEP security used in WiFi used a pretty good algorithm (RC4)
    but suffered from a bungled implementation. Again from Wikipedia:
    Because RC4 is a stream cipher, the same traffic key must never be used twice.
    The purpose of an IV, which is transmitted as plain text, is to prevent any
    repetition, but a 24-bit IV is not long enough to ensure this on a busy network.
    The way the IV was used also opened WEP to a related key attack. For a 24-bit
    IV, there is a 50% probability the same IV will repeat after 5000 packets.

    Currently we have attacks going on against the Trusted Platform Module (TPM), an
    embedded cryptographic device whose spec was designed by the Trusted Computing
    Group (TCG): http://www.cs.dartmouth.edu/~pkilab/sparks/ - and cold boot attacks:
    http://en.wikipedia.org/wiki/Cold_boot_attack

    A long while back I worked on a secure communications system for the military;
    a whole team of guys from GCHQ were all over every line of code, every
    circuit design, the PCB and the mechanical construction, looking for such
    weaknesses.

    In short, the algorithms may be solid, but the implementation, on the Prop, has never
    been done before. It pays not to be so cocksure.
  • Leon Posts: 7,620
    edited 2012-02-14 02:42
    Was that Pritchel? We had to include that in our Bowman radios when I worked for Racal. I had to stand in for my boss once (who knew all about such things, but my field was human factors) and give a very important presentation to a large number of MoD and military people on how the key distribution system worked. Understanding it and explaining it clearly wasn't easy. Later, during verification, I had to demonstrate to the MoD that opening the radio case erased the keys, before a probe could be inserted.
  • Heater. Posts: 21,230
    edited 2012-02-14 04:53
    You know Leon, I think they still have the death penalty in England for discussing such things!
  • Leon Posts: 7,620
    edited 2012-02-14 05:09
    Bowman and Pritchel are in the public domain:

    http://www.google.co.uk/search?q=bowman+pritchel&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-GB:official&client=firefox-a

    Not the details, of course.

    A pritchel is a pointy thing used by a farrier or blacksmith to make the holes in horse shoes, BTW. It fits into a hole in the anvil.
  • Heater. Posts: 21,230
    edited 2012-02-14 05:35
    Blimey, so that's why the square ends of anvils have a hole in them. Never knew that.
  • Leon Posts: 7,620
    edited 2012-02-14 05:38
    Yes, that's what it's for. It's in most dictionaries. I'd never heard of the word before I joined Racal.
  • Heater. Posts: 21,230
    edited 2012-02-14 05:44
    Leon,

    Seems that the Bowman and Pritchel project was a total disaster.
    The one I worked on was some years before that, for use with Clansman radios. No mention of it on the net anywhere I can find.
  • Gadgetman Posts: 2,436
    edited 2012-02-14 05:54
    To be specific about the Enigma...

    They used a new set of codes each day (listed in books) for initial setup of the wheels and cross-connects.
    This was the same for everyone with the same book of course.
    This was pretty much unbreakable unless you had access to the same model machine and code book.
    (The Luftwaffe and Kriegsmarine used different versions, and Italy had at least one model, too)
    In fact, Enigmas were in use for decades after the war.

    What the Germans did wrong was completely procedural.
    The first part of every message sent was a random 'offset' (made up on the spot by the sender) to be applied to the daily code. And since the Germans were sticklers for detail, they insisted that the offset code be sent twice at the beginning of the message.
    This was a procedure made by the military, not by the designers of the machine...
    (The road to Hell is paved with good intentions... )

    The flaw was spotted by Polish cryptanalysts before the outbreak of WWII, and they actually started construction of the first 'bombe' to exploit it, but they never had the time.
    Luckily, they managed to send the material to the French, who didn't really understand its importance or significance, but passed it on to the British.
    (A bit of a foul-up by the French, considering that they and the Italians were once the premier codebreakers of the world.)
  • Leon Posts: 7,620
    edited 2012-02-14 07:38
    Heater. wrote: »
    Leon,

    Seems that the Bowman and Pritchel project was a total disaster.
    The one I worked on was some years before that, for use with Clansman radios. No mention of it on the net anywhere I can find.

    I never found out why they rejected the two bids. I think that ours was the best. The two consortia then joined forces, but the MoD didn't go for that, and ended up buying a system from Lockheed Martin. It doesn't seem to be very popular with the people who actually have to use it.
  • Javalin Posts: 892
    edited 2012-02-14 08:08
    Better Off With Map And Nokia. Apparently.
  • Heater. Posts: 21,230
    edited 2012-02-14 08:44
    Javalin,
    I noticed that; it made me laugh like hell, what with having worked for both Racal and Nokia and still being here in the land of Nokia.
    Then my coworker, a Finnish guy, told me how when he was in the army they said "if your radio does not work there is always Nokia" - but in that case they meant your Nokia army boots!
  • pedward Posts: 1,642
    edited 2012-02-14 12:31
    No matter how you slice it, you cannot have a truly secure system when the key is known to the device. You need only look at recent technologies that rely on this principle for proof: Blu-ray, PS3, HDCP.

    My whole point is that the weakness lies in the chip, not in the procedures or algorithms I've detailed. If the secret key can be recovered from the chip, you lose. The methods I proposed are almost exactly how existing cryptography works.

    Standard PKC uses a pair of very large prime numbers to encrypt and decrypt a secret key that is then used as the key for a symmetric cipher.

    Conversely, PKC enables users to sign a message with their secret key and validate the signature with the public key.

    If you take a step back from PKC, you can use a hash function and a salt to do the same thing.

    If I have a secret key and I append it to a message, then hash the message plus the secret key, the resulting hash is unique because the secret key adds a deterministic variability to the message.

    You then transmit your message (sans key) along with the signature hash; on the receiving end they take the message, append the shared key to it, hash it, then compare the results. If the message is authentic, the hashes match.
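
    A small Python sketch of this append-and-hash flow (the key and message are made-up examples; a production design would typically use HMAC rather than a bare hash to sidestep length-extension issues):

        import hashlib, hmac

        def sign(message, shared_key):
            return hashlib.sha256(message + shared_key).digest()

        def verify(message, signature, shared_key):
            # constant-time comparison of the recomputed and received hashes
            return hmac.compare_digest(sign(message, shared_key), signature)

        key = bytes.fromhex("00112233445566778899aabbccddeeff")   # example 128-bit shared key
        msg = b"field update payload"
        assert verify(msg, sign(msg, key), key)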

    These are the basic premises of digital signatures and hashing, and they are accepted practices with known weaknesses. The weaknesses are the quality of the key and the secrecy of the key.

    Salting a message with a secret value is how you are supposed to use hashes; simply hashing the message doesn't do anything to prevent tampering. You will often see packages with their hashes posted on a website so you can compute the hash locally and compare the two. This is a way of authenticating the package, but it only works if the website isn't tampered with. There have been a number of high-profile attacks in recent years that invalidate this process, because it's only as secure as the web server where the hash is publicly posted.

    So, the point of all of this is that the P2 can't be invulnerable, only difficult enough to exclude everything but the really, really expensive and laborious methods. The only organizations with those resources won't be looking for the source code to a P2 program; they would be attacking the chip itself, and since it's a cheap chip there is no reason to attack it.

    By far, the biggest weakness of the P2 code protection is the human being programming it. A typical 8-character password only has about 48 bits of quality (roughly 6 bits of entropy per character), which is within the realm of rainbow tables. To be truly high quality, the developer will need to use a high-quality key generation program and store the key in a cryptographic keyring, using a less secure human passphrase to access the keyring.

    If you generate a 64-128 bit key with a quality key-generation program, you have eliminated the key itself as the weak point in the entire system.
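
    As an example, a sketch of doing this on the development machine with Python's secrets module (any CSPRNG-backed key generator would do):

        import secrets

        key = secrets.token_bytes(16)    # 128 random bits from the OS CSPRNG
        print(key.hex())                 # keep this in a protected keyring, not in source control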

    I'm trying to shift everyone's focus from "code protection" to "code authentication", because the first step in protecting anything is authenticating the parties. Whether the P2 implements encryption is irrelevant; what is relevant is code authentication. The P2 has a huge thing going for it in this area: the key is not fixed for every P2 or every P2 product. The key is settable per device, per family, etc., which significantly reduces the risk associated with systems like the PS3 or Blu-ray. Those systems rely on key revocation and on storing the secret keys en masse in every device.

    In the P2, the secret key lives in 2 places: the silicon and the developer's computer. I will put a wager on the computer being less secure, and that's why I suggested the use of a keyring to secure the secret keys from theft.