
The New 16-Cog, 512KB, 64 analog I/O Propeller Chip


Comments

  • potatohead Posts: 10,259
    edited 2014-05-12 19:02
    Assembly language programmers need to understand self-modify as part of the whole experience. Data as code, code as data, etc...

    http://resources.infosecinstitute.com/writing-self-modifying-code-part-1/

    If you really want to know how things work, you do it in assembly language, no holds barred. These are the people who can create, crack, or modify copy protection, they are the people who write the "voodoo" many higher level languages depend on, can bootstrap their tools onto just about anything, and I could go on.

    http://research.cs.wisc.edu/wisa/papers/acsac05/GCK05.pdf

    (and BTW, that one appears to be a great read for later)

    http://en.wikipedia.org/wiki/Self-modifying_code

    Here's the thing. I'm not a big fan of limiting exposure to core concepts like this. People can take from it what they need or want to. The little kid who gets exposed to this kind of thing on a Propeller might go on to do great security research, for example. Remember Geohot? Bet your Smile he knows what self-modifying code is, along with a whole lot of other fundamental concepts.

    From there, we can always talk about good assembly language programs. Those really do vary. On some limited machines, such as a Propeller, or an old 6502, even a 286/386, self-modifying code can make the difference between something being possible or not. Where the scope of the hardware is more broad, it's not likely to be needed or encouraged.
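
    For anyone who hasn't seen it, here's roughly what that looks like in practice on a P1. COG RAM has no indexed addressing, so you read table[index] by patching the 9-bit source field of the very instruction that does the read. A minimal sketch (label and register names invented):

            mov     ptr, #table        ' base address of the table in COG RAM
            add     ptr, index         ' add the element offset
            movs    rdslot, ptr        ' patch the source field (bits 8..0) of rdslot
            nop                        ' one-instruction gap: the P1 has already
                                       '  fetched the next instruction, so a patch
                                       '  never affects the line right below it
    rdslot  mov     value, 0-0         ' 0-0 is the placeholder; now reads table[index]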

    ...until that expensive software license breaks for no good reason. Guess what? That was my first motivation to learn about such things long ago. Had a disk go bad, 30 bucks or so in 1984 dollars down the drain. Did I go buy another? Hell no. I copied one, and it didn't work, and I spent a month on an Apple ][ understanding why, then I fixed it, and I still have it, and it still works, because I can make sure it works, just like I paid for. Went on to do the same with an Atari, then the PC, and even today there are some licensing schemes that... let's say they don't pose much of a problem for me and leave it at that.

    Say I've an old device, or I want to combine something with something else in ways not intended. All sorts of tricks might be needed.

    These are all good reasons to be completely exposed to assembly language, not just some partial set. And while we are at it, they should get just a brush of machine language too. You know, think up the program, look up the opcodes, hand assemble it, simulate it on paper, and type the darn thing in. Never hurts, and it's a lot like knowing how to perform calculations on paper. We don't do it every day, but when we need to do it, we've seen it and have some competency to work from.
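
    To make that concrete: a P1 instruction is a single 32-bit long with a 6-bit opcode, four effect bits (ZCRI), a 4-bit condition, and two 9-bit fields for destination and source. Assuming "value" happens to sit at COG address $010 (an address invented for the example), hand-assembling mov value, #5 goes like this:

            101000                     ' opcode: MOV
            0011                       ' ZCRI: leave Z and C alone, write result, immediate source
            1111                       ' condition: always
            000010000                  ' destination field: $010, the address of value
            000000101                  ' source field: the literal 5
                                       ' packed together: $A0FC2005

    Type that long in, and it runs. That's the whole mystique of machine language.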

    There is a whole Demoscene celebrating this kind of thing. People who want to know how it really works, not how they are told it works, or how it should work.

    Your answer is generally "no", unless the task is within the scope of assembly language programs in general, where a higher level language or OS isn't so much of a consideration. In that case the answer may well be yes, and it's probably a tech demo, or somebody pushing the hardware places it was not intended to go, etc... All good fun. The more the merrier!

    Heck, look at modern JavaScript! That thing allows high level self-modify, as does ancient FORTH.

    http://www.i-programmer.info/programming/javascript/989-javascript-jems-self-modifying-code.html

    Spiffy!
  • David Betz Posts: 14,511
    edited 2014-05-12 19:12
    potatohead wrote: »
    Assembly language programmers need to understand self-modify as part of the whole experience. Data as code, code as data, etc...
    ...
    It may be that self-modifying code is a useful tool that can be used to good effect in very limited circumstances. However, I don't think learning assembly language on a processor that requires self-modifying code for even simple operations like subroutine calling is necessarily a good idea. Heck, even Chip ran into trouble when he added hub execution, which made self-modifying code difficult, so he added a hardware stack to avoid it. The original P2 also had additional instructions to avoid having to use self-modifying code for indexing COG memory. If it's such a great tool, why go to the trouble of modifying the architecture to make it less necessary?
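
    To be concrete about the subroutine point: on the P1, CALL is really JMPRET, and it stores the return address by rewriting the RET instruction at the end of the callee (the assembler requires the matching label to end in _ret). A sketch of what the assembler actually emits (label names invented):

            call    #blink             ' assembled as: jmpret blink_ret, #blink

    blink   xor     outa, ledmask      ' subroutine body
    blink_ret
            ret                        ' really "jmp #0-0": JMPRET patched this
                                       '  instruction's source field with the
                                       '  return address at call time

    So even ordinary control flow is self-modifying, which is also why plain PASM subroutines aren't reentrant or recursive without extra work.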
  • potatohead Posts: 10,259
    edited 2014-05-12 19:22
    Again, it's about learning what assembly language is. PASM is really easy. I've helped a few people learn assembly language, and the P1 is by far the easiest, save for some very old CPUs, which are also pretty easy.

    Take somebody skilled in PASM, and give them an instruction set with an index, and they will put it to use just fine. Often, one can do the reverse, which is precisely what a lot of our experiences were.

    At the assembly language level, few things are disallowed. We are close to the hardware and that means exercising it in ways that get stuff done. If there is a higher level context, what makes sense changes.

    I didn't say that a Propeller is a great device for learning how to write well structured, high level language compliant assembly language. I said it's a great device to learn assembly language. Those are two entirely different things.
  • David Betz Posts: 14,511
    edited 2014-05-12 19:27
    potatohead wrote: »
    I didn't say that a Propeller is a great device for learning how to write well structured, high level language compliant assembly language. I said it's a great device to learn assembly language. Those are two entirely different things.
    Probably so. I didn't really mean to say it was a bad processor for learning assembly language, only that it is not necessarily an ideal one for the purpose, since it requires understanding concepts like self-modifying code that are seldom used in mainstream programming and are, if used at all, relegated to specialized code that squeezes maximum performance out of the hardware at the expense of being harder to understand and maintain. It shouldn't be the first tool grabbed from the toolbox, in my opinion.
  • jmg Posts: 15,155
    edited 2014-05-12 19:51
    David Betz wrote: »
    Edit: Of course, you *can* write self-modifying code for many if not most processors. I guess what I'm really asking is if there are any other modern processors that require the use of self-modifying code for critical operations like calling subroutines or indexing arrays.

    I do not know of any flash-based embedded controllers that even suggest self-modifying code.
    Some do allow code to run out of RAM, but requiring self-modifying code is certainly not a feature of any mainstream microcontroller.
  • Heater. Posts: 21,230
    edited 2014-05-12 21:48
    David,
    Can you give me one example of any modern processor that makes use of it other than the Propeller?

    Yes, your PC's x86.
    Every time your PC gets rooted or owned by some virus, trojan, or whatever they want to call them nowadays, there may well be someone making use of self-modifying code, injected via some buffer overrun or similar bug in your application or OS.

    Sounds like every programmer should be made painfully aware of self-modifying code so they can defend themselves in the future.

    The Propeller is the only processor I know that executes instructions from its own register set (and from nowhere else).
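
    To see how literal that is: a COG is 512 longs, every one of them both a register and a potential instruction, with the special registers mapped at the top ($1F0-$1FF). A throwaway sketch (register names invented, and "entry" assumed to be the first long of the COG image):

    entry   mov     dira, ledmask      ' dira is simply COG register $1F6
            mov     outa, ina          ' inputs to outputs: register-to-register move
            add     entry, #1          ' and "entry" is register $000, so code can be
                                       '  read, written, and executed like any data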

    I love all this Propeller weirdness :)

    Not sure why they don't want a more modern language but C is what they seem to want.

    I'm curious. What more modern language do you have in mind? There are precious few options in MCU land.
  • rod1963 Posts: 752
    edited 2014-05-12 23:40
    I have to agree with Koehler; his observations 1-4 make sense as to why things are the way they are. Parallax isn't really interested in expanding their market share, and this explains certain decisions they've made quite well.



    Heater

    Yep, outside of writing malicious viruses, there generally isn't a need for self-modifying code anymore. I learned how to write viruses while studying the structure of polymorphic ones back when the 286/386 were popular, and I haven't used the technique since.

    And this oldie but goodie - CoreWars. That's the only other place I used it.

    Microsoft used it to screw with other DOS vendors, and it was discovered by Schulman. Microsoft being evil, well, that's par for the course.

    The fact that those programming Cogs have to play this game just indicates an architectural limitation, not a feature. It's a throwback to a very early age of computer programming.

    Would I use the Prop to teach 32bit assembly language? No, there are better choices available.


  • potatohead Posts: 10,259
    edited 2014-05-12 23:55
    Here's an analogy. Word processing as opposed to understanding Microsoft Word. I remember the discussions on this, and they were along the lines of, "I don't think it's a great idea for somebody to learn on X, because it's not Microsoft Word."

    To which I replied, "There is learning Microsoft Word, and there is learning what word processing actually is. Two different things."

    Somebody who has learned Microsoft Word as their word processor thinks of word processing in fairly strict terms, defined by what Microsoft Word does and how it does it. One could take various approaches in WordPerfect, and OpenOffice, among others. Each of those programs has some merits over the others. And that is where the magic is. Once somebody actually gets exposed to more than one word processor, they begin to see word processing itself as a thing, and the programs available to them as tools to accomplish that thing, rather than seeing the tool as the means to the end.

    Most, if not all of us, have been through little things like this, of course. Ever mentor kids who have not? It's quite interesting! I once fired up AppleWorks, just for fun, to show what word processing was. On that computer, limited as it is, the task gets broken down into bits, compared to the all-in-one mega suites we run all the time now. And underneath the hood, there is all manner of things going on! Tabs, formatting characters, pagination, spell checking, etc...

    Microsoft Word has a mode for those people who actually understand word processing, and it's the Show All, or show-formatting, option. Something like that. In that mode, it shows every character, and for a lot of people, it's an eye opener. In fact, exploring it some can explain all sorts of odd behavior seen in Microsoft Word and other word processors. Things like typing new words that were not there before and having them appear in bold without any prompting.

    Ever have that happen? Of course. The why has to do with the formatting characters and where they are and what a user happened to do within the context of them. Until a person sees that, it's all kind of fiddly. We have format painter for them.

    Regarding assembly language and its sibling machine language, having some exposure to the weirdness is good! In fact, it's great! It's great because the learning happens to come easy in PASM. And that is because PASM is kind of beautiful, really. A work of art, if you ask me. Back in the day, I would say a similar thing about the 6809. That one is beautiful too, and it's also kind of easy in a similar way.

    Once that learning has happened, we have people counting cycles, writing self-modifying code, thinking hard about data as code and code as data, what is a register really, and all manner of basic things common to the task of assembly language. And they get stuff done, and it's fast, and on a P1, nearly always works too.

    From there, they will be in the "Assembly Language is PASM" camp, just like the "Microsoft Word is word processing camp", and if that's the only place they go, fine. They can use the chip well. However, if they branch out and explore a little, all those concepts play out on the new design, and it's hard, and they adjust and then they know what assembly and machine language really is, and that's the goal.

    People who gain that kind of understanding also don't gain limits to how they employ it. That too is the goal.

    In the 80's, I wrote self-modifying code on the 6502 all the time. Why? Because the damn thing wasn't a 6809! PITA, that was. No big index registers, no multiply, no big stack, etc... Loved the 6809, and it was a lot easier. It also brought concepts to my mind like relocatable and reentrant assembly language programs. Doing that is cake on a 6809. It's a bit of a challenge on a 6502. For a real mind bender, self-modify copy protect routines on the 6809 are all kinds of fun. You learn a ton debugging those away. (Copying game carts by making cassette or disk images, firing up the tools and sorting it all out. Hey, I was a poor kid! Had a good time of it all though. No regrets.)

    Just knowing that means somebody understands what assembly is and they have that basic grasp of the role of the CPU, hardware, etc... and that's the goal. Once we have them there, it's an excellent foundation for understanding what happens in the layers above or beside.

    Say you've got a purpose for the machine. Maybe it's a spiffy demo. Maybe it's a very dedicated task. Do you need the OS? Of course not. Do you even care about a lot of things related to the OS? No. Write your program and own the machine. It's yours. Max it out. That's what it's there for.

    Now, a layer up, we find out OSes make a lot of sense, particularly when one wants to do more than one specific task on a machine. Same thing is true for libraries, and the usual things we see every day. There are lots of programmers out there who know what those things will do, and that's great. They don't care about the voodoo underneath and they don't have to either.

    But, some of the programmers do care. And if they get some exposure to assembly and machine language, things like "buffer overrun" begin to make a whole lot more sense above and beyond a simple rule or use case avoidance normally associated with that kind of thing.

    I'll admit I laughed out loud when I first started to produce some COG index code. Self-modify? Oh, the horrors! But then, thinking about it some on a Propeller, it made a lot of sense! The code gets copied from the HUB to the COG, and the COG can do whatever it wants without having to worry about the other COGs. Perfect! Other CPUs rely on an interrupt, and self-modifying code in that context is a bit different animal, or it's just not possible and other means need to be used.

    Funny, I hear a lot about how slow the P1 is for lack of indexed addressing in COG RAM. Well, isn't it interesting how on a lot of other CPUs, adding an index also adds cycles? Often, it's a similar number of cycles compared to the extra couple of instructions needed to get it done on a P1. Slow? No. Just different, and somebody who understands assembly language and has seen a few different variants gets that. No worries.
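
    For the record, here's the usual shape of it, cycle counts included. A sketch (names invented) that fills a 16-long buffer in COG RAM by patching the destination field of the store instruction each time around:

            movd    :store, #buffer    ' point the store at buffer[0]
            mov     count, #16
    :store  mov     0-0, ina           ' 4 cycles: store into buffer[n]
            add     :store, d_one      ' 4 cycles: bump the destination field
            djnz    count, #:store     ' 4 cycles taken: loop (djnz sits between the
                                       '  add and the patched instruction, which
                                       '  satisfies the pipeline gap)
    d_one   long    1 << 9             ' destination field starts at bit 9
    buffer  res     16

    The "index" costs one extra 4-cycle instruction per element, which is the same ballpark as the cycle penalty an indexed addressing mode carries on plenty of classic CPUs.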

    Anyway, just some context for the statement above. It's important that we don't structure things for people too much. Yes, a few will get lost, but some others will do amazing and interesting things. Chip did, and he went to the same school I'm describing right along with many others, and it's a perfectly fine school, if a bit unorthodox.

    Lots of great assembly language programmers learned on a few different CPUs, and each one they learned added a lot to how they think of things and why they do them. Kicking this all off on an easy CPU is just great! It's easy, fun, accessible, and from there, they are off and running to whatever devices make sense.

    The thing I like the most is some understanding of why and how, not just that a self-modify routine is bad. Truth is, depending on the task and device, it might not be bad at all! So then, better for people to be in a position to grok that and do what they will, instead of avoiding something "because it's bad."
  • kuba Posts: 94
    edited 2014-05-22 13:05
    I know that it probably has been mentioned before, but I really see no reason at all for the Propeller to need to support the big code model using what amounts to a virtual machine. Throwing in an off-the-shelf ARM core to do all that would be dead simple, require very little NRE investment, and solve the problem of running "big" C code. A Cortex M0+ or M3 core would do nicely. There are already good open source tools that support all that. All it takes is an off-the-shelf IP and a tiny bit of glue to attach it to the Prop side of things.

    A 16 COG Propeller optimized for I/O tasks, with an ARM thrown in, would satisfy my needs for quite a while. Essentially, the P16X64A with a small ARM core would be it. The only reason I use XMOS is that it's the only thing out there that can pull off EtherCAT in software, and I do need it in software, since I do math on the packets as they fly by. As long as the Prop could do High Speed USB 2.0 (UTMI) or 100 MBit Ethernet (MII, maybe RMII) on each COG, I'd be all set. That's why some XMOS-like port machinery, here called the smart pin, is needed. I personally don't see much reason to do any sort of direct attachment to either USB or Ethernet; there are PHY chips for that, and reimplementing that hardware to be compliant would be a colossal waste of time, too. Doing non-compliant "hacked" USB or Ethernet in a professional/industrial product won't fly, I don't think.
  • jmg Posts: 15,155
    edited 2014-05-22 13:42
    kuba wrote: »
    ... The only reason I use XMOS is that it's the only thing out there that can pull off EtherCAT in software, and I do need it in software, since I do math on the packets as they fly by. As long as the Prop could do High Speed USB 2.0 (UTMI) or 100 MBit Ethernet (MII, maybe RMII) on each COG, I'd be all set. That's why some XMOS-like port machinery, here called the smart pin, is needed. I personally don't see much reason to do any sort of direct attachment to either USB or Ethernet; there are PHY chips for that, and reimplementing that hardware to be compliant would be a colossal waste of time, too.

    This sounds very interesting.

    Can you start another thread, with some numbers on the opcode cycles you need to "do math on the packets as they fly by", the bus widths/strobes needed for USB 2.0 (UTMI) & 100 MBit Ethernet (MII, RMII), and any part numbers for PHY-side chips?

    I see FTDI have FT313H, which has parallel interface and does much more than just HS-PHY, for $2.47/100+
    & Microchip have USB3450 UTMI+ (60MHz ClkOUT) for 94c/100+ & looks to need 60MHz burst streaming.
  • kuba Posts: 94
    edited 2014-05-22 13:52
    jmg wrote: »
    I see FTDI have FT313H, which has parallel interface and does much more than just HS-PHY, for $2.47/100+
    & Microchip have USB3450 UTMI+ (60MHz ClkOUT) for 94c/100+ & looks to need 60MHz burst streaming.
    Devices like the FT313H do both more and less than an HS-PHY. They do more by providing extra functionality, but they also hide some functionality that is only available if you can talk directly to a PHY. So, for example, it'd be impossible to do a USB packet analyzer using an FT313H, but using any of the XMOS chips (and I do mean any) it's quite trivial.
  • Roy Eltham Posts: 2,996
    edited 2014-05-22 13:56
    An ARM core in a Propeller is just RIGHT OUT! Seriously, I cannot see Chip doing that...

    Also, how would the ARM core realistically interface with the Prop side in a way that would be acceptable at all?
  • Electrodude Posts: 1,637
    edited 2014-05-22 13:57
    kuba wrote: »
    All it takes is an off-the-shelf IP and a tiny bit of glue to attach it to the Prop side of things.

    Parallax doesn't want to have to deal with IP problems. An ARM will never end up being part of the Propeller, not only because of IP problems, but also because it would create a chimera. People would end up only using the ARM core and never using any of the Propeller cogs. Then people would realize that they might as well just use an ARM, and they would all switch to that, and nobody would use the Propeller.
  • Heater. Posts: 21,230
    edited 2014-05-22 14:10
    Electrodude,
    Then people would realize that they might as well just use an ARM, and they would all switch to that, and nobody would use the Propeller.

    But...but...an oft-heard complaint from Raspberry Pi users, for example, is that they cannot wiggle I/O pins in any fast, real-time way. The Linux OS is really not built for that, and even if it were, doing so would suck down performance.

    This is obviously a commonly perceived problem. See how TI is adding little real-time processors, the PRUs, to its ARM SoCs. See how Xilinx has added ARM and FPGA in a SoC.

    Why not ARM and super easy to use Propeller COGs?

    Roy is right, though; I don't see it happening.
  • jmg Posts: 15,155
    edited 2014-05-22 14:15
    kuba wrote: »
    Devices like the FT313H do both more and less than an HS-PHY. They do more by providing extra functionality, but they also hide some functionality that is only available if you can talk directly to a PHY. So, for example, it'd be impossible to do a USB packet analyzer using an FT313H, but using any of the XMOS chips (and I do mean any) it's quite trivial.

    Sure, but the FT313H is easier to talk to, so there are trade-offs. It is not clear if a P1+ will even talk UTMI.

    All this useful detail is why I suggested you start a thread on what is needed for 'on the fly' type designs.
    Can XMOS still run at 100MOPs with a 60MHz CLK coming from the slave interface, or does it become 60 MOPs ?

    A P1+ PLL able to lock to a 60MHz input would help here (Q: can it do that?), giving choices of a 120MHz or 180MHz SysClk.
    Chip has the FIFOs working on fSys/N DMA to the hub, so a single HW-gated DMA (PLL assumed) could capture @ 60MHz.
  • Cluso99 Posts: 18,069
    edited 2014-05-22 14:17
    kuba,
    Interesting concept.

    I presume you are only snooping the USB/Ethernet?
    Therefore, some of the heavy lifting such as CRC checking is not required?
    Curious, because the P1 can run at 96MHz when overclocked, so USB FS can be read, but you cannot do anything with it due to timing.
  • Todd Marshall Posts: 89
    edited 2014-05-22 17:39
    Electrodude wrote: »
    People would end up only using the ARM core and never using any of the Propeller cogs. Then people would realize that they might as well just use an ARM, and they would all switch to that, and nobody would use the Propeller.
    I wonder if they got that same argument when they started putting them in FPGAs?
  • Electrodude Posts: 1,637
    edited 2014-05-22 17:59
    Todd Marshall wrote: »
    I wonder if they got that same argument when they started putting them in FPGAs?
    The P2 is only emulated in FPGAs for testing purposes. FPGAs and ICs can be designed in the same language, Verilog, so Chip took advantage of that to test his chip without doing a fab test run every two weeks, which would be ridiculously and prohibitively expensive (and the fabs don't do test runs nearly that often, anyway). Once the real, physical P2 comes out, nobody will emulate it on FPGAs anymore. Also, there won't be an IP-encumbered FPGA in the final P2, the same way there will never be an IP-encumbered ARM as part of a Propeller. Testing is a whole different story - it's only testing, not the real thing. Altera doesn't gain ownership of your design just because you ran it on one of their FPGAs once, but ARM Holdings, or whoever Parallax would get their off-the-shelf IP from, would own part of the final, physical P2 if an ARM were included.

    Putting a Propeller in an IP'd chip and putting an IP'd chip in a Propeller are two completely different things. You can't use the same argument to justify or condemn both of them together.
  • jmg Posts: 15,155
    edited 2014-05-22 18:11
    ... Once the real, physical P2 comes out, nobody will emulate it on FPGAs anymore. ...

    I would not be too sure of that; Parallax are planning an FPGA board themselves*, and boards like the Cyclone V BeMicro CV come for $34.80 from Verical.
    Once the new FPGA builds are out, we will know how many new COGs can fit into the various FPGA offerings, but the new respin is likely to be more FPGA-matched: faster in FPGA, with more COGs in smaller FPGAs.

    * Parallax may be able to drop the size/price of the selected Cyclone V FPGA on their FPGA board with the new design.

  • Electrodude Posts: 1,637
    edited 2014-05-22 18:45
    jmg wrote: »
    I would not be too sure of that; Parallax are planning an FPGA board themselves*, and boards like the Cyclone V BeMicro CV come for $34.80 from Verical. ...

    I thought that FPGA board was just to help people help Chip design the P3? Why would anyone want to emulate a chip on a $35 board if you could just buy the physical chip, which would be way faster and way more efficient, for only $20 (including support circuitry)?

    Either way, neither an ARM core nor an FPGA core will ever end up inside a Propeller.
  • mark Posts: 252
    edited 2014-05-22 18:57
    I understand the desire to execute big code efficiently; what I don't understand is the fascination with ARM. I'm confident Chip has the skills to pull off his own Harvard core design, so I hope that's the route he would take if he ever decided to produce such a chip.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2014-05-22 19:15
    markaeric wrote:
    I'm confident Chip has the skills to pull off his own Harvard core design ...
    That would be a radical departure from the current Propeller architecture, which is Von Neumann all the way -- including code in registers and self-modifying code.

    -Phil
  • mark Posts: 252
    edited 2014-05-22 19:26
    That would be a radical departure from the current Propeller architecture, which is Von Neumann all the way -- including code in registers and self-modifying code.

    -Phil

    I wasn't implying that Chip should make an entirely Harvard-architecture Propeller, but rather a chip with Von Neumann cogs as we have "now" for soft peripherals and whatnot, plus one Harvard-architecture core for big code (instead of the ARM core that has been suggested over and over).

    And just to be clear, I'm in no way suggesting that this be done for the P1+/P2/watchamacallit (if ever) :lol:
  • Ramon Posts: 484
    edited 2014-08-15 22:44
    Hi, it looks like everybody is playing with the new Verilog toy and no one cares about the new Propeller chip anymore. Is this plan still alive?
  • potatohead Posts: 10,259
    edited 2014-08-15 22:49
    Yes. No worries.
  • Heater. Posts: 21,230
    edited 2014-08-16 11:44
    Phil,
    Von Neumann all the way -- including code in registers and self-modifying code.
    Having read about the "Von Neumann architecture" for a few decades now, I have never heard that "code in registers" or "self-modifying code" was part of the deal.

    Do you have any references to such Von Neumann architecture definitions?
  • Tubular Posts: 4,646
    edited 2014-08-16 19:55
    Ramon wrote: »
    Hi, it looks like everybody is playing with the new Verilog toy and no one cares about the new Propeller chip anymore. Is this plan still alive?

    I think it just appears that way because it's been a while since a P2 image. I suspect a few besides myself are still working with the P2. It's very nice to work with.
  • kwinn Posts: 8,697
    edited 2014-08-18 06:28
    Heater. wrote: »
    Phil,
    Having read about the Von Neumann architecture for a few decades now, I have never heard that code in registers or self-modifying code was part of the deal.

    Do you have any references to such Von Neumann architecture definitions?

    The Propeller is the only chip I have come across that has code in registers and generally treats memory and registers as equivalent. IIRC there was a minicomputer that used main memory to store registers.

    Self-modifying code is implicitly part of a Von Neumann architecture, even though it is not an explicit part of the definition. Since code and data share memory, self-modifying code is possible even without specific instructions for it.
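
    The Propeller makes that concrete in a couple of lines, since COG code and data share the same 512-long space. A sketch (names invented):

            mov     temp, target       ' read the instruction at "target" as plain data
            and     temp, #$1FF        ' pick apart its 9-bit source field
            mov     target, other      ' or replace the whole long with another instruction
            nop                        ' pipeline gap before the patched long executes
    target  mov     value, 0-0         ' the very same long, fetched and executed as code
    other   mov     value, #42         ' data here, an instruction once copied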
  • Leon Posts: 7,620
    edited 2014-08-18 06:52
    TI 16-bit CPUs such as the 9995 had a "memory-to-memory" architecture, with registers in main memory. This had several advantages over other devices available at the time, such as fast and efficient interrupt handling by switching to a different set of registers. They were based on a previous minicomputer design. I did quite a big job with the 9995 (in assembler) and rather liked it. To save money (the TI assembler was very expensive), I wrote my own cross-assembler using macros for the Microsoft Macro-80 assembler running on a TRS-80.