
RISC V ?


Comments

  • cgracey Posts: 14,133
    edited 2018-04-03 03:49
    I've heard that IP expenses are big portions of the cost in making fancy ASICs these days. The ARM bill is probably minor amid the specialty I/O blocks that make up a typical SoC design.

    RISC V may need a similar ecosystem in order to routinely displace ARM. Do you think that's likely, without some USB/HDMI-like organization to ratify what's RISC V-compatible, exacting tribute all along the way?

    Certainly, simplifying the business model is good.
  • You can count the # of ARM architectural licenses with your fingers and toes. It's definitely a level up from just licensing a core. From a 2015 article:

    "The company has seven publicly announced 64-bit architectural licensees: Applied Micro, Broadcom, Cavium, Apple, Huawei, Nvidia, AMD and Samsung. It also has another seven publicly announced 32-bit architectural licensees, of which five – Marvell, Microsoft, Qualcomm, Intel and Faraday – do not have a 64-bit licence."

    From wikipedia:

    "Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture"
  • potatohead Posts: 10,253
    edited 2018-04-03 06:29
    Custom silicon is going to increase in prominence, if you ask me. There isn't anywhere else to go at present, and where we are just isn't quite good enough for the next wave of problems and needs that have to be addressed.

    The way I see it, on the consumer product end of things, an open ISA + tools means being able to combine various things (network, graphics, sound, specialized I/O, perhaps touch, and whatever specific thing makes the end product well differentiated), and to do so at very compelling power and performance points.

    Then, all one needs is to touch up the tools to incorporate whatever specialized things are part of the package and go.

    What I see driving all that is another round of miniaturization, and with that, the need to make whatever battery is needed work for more than a daily charge cycle.

    Watch as Apple takes ARM and does exactly that. For Apple, the ARM investment is significant, though who knows? Either way, great case in point. They will sell those products to the posh, high end, 15 to 20 percent of us willing to pay for a very polished, performant experience.

    IMHO, on the high end, those same ideas may well find their way into data center and/or specialized compute applications. We've hit a single thread wall. It's not absolute, but it's one hell of a lot stiffer than it was not so long ago. The problem remains single threaded compute performance.

    It's just needed. Big, important pieces of software are not so easily multi-threaded. And even if they are, the man-hour investment required to do that is insane. Talking kilo-years here. Minimum. With a nice chunk of that being debug, and another nice chunk being edge case handling, and testing.

    That same open ISA, coupled with task based silicon, much like we've done here on the P2, will mean packaging high performance things into hardware, specialized to maximize performance.

    CAD is one area that could really use this, as an example I'm very familiar with. Geometry kernels, at least the ones that work on NURBS, B-Rep solids, are single threaded things. And they are massive investments. Takes years and years of handling tons of cases people think up to get one mature and able to represent real world things in a robust, efficient way.

    Clever modeling can improve on that (where multi-threading is concerned), but at best, the final part stitch up is single threaded. The CAM work associated with all that is just as intense, if not a bit more, like say with micro-machining, or just a big, detailed casting type part. And that help, if possible or practical, does nothing for the massive data objects already out there, and in use. A growing, very significant problem.

    A fairly standardized representation of the core entities needed for topology resolution could yield big gains in overall operations possible per minute, which would translate into parts with higher surface complexities being more possible and or practical. Right now, these kinds of things get designed in chunks, where the boundaries between the chunks can be an issue to design across, or even plan. Auto and aero both experience this.

    Think of it kind of like a specialized math function, and where a software loop gets us there, a one or few instruction silicon implementation can be an order faster.

    Just last year a friend, where I used to work as general manager with a focus on engineering, was micro-machining injection molds. I was, frankly, stunned at the time required to generate and then validate tool paths, and the RAM! 64 GB was a reasonable, conservative amount of RAM. Generation can be multi-threaded in that each atomic operation can be computed in parallel with the others, but one is still bound by the longest one, and in this case, it was taking days, and bound by the lower performance that happens when all cores are really chewing on big data and compute. (less significant with a big cache, but still sometimes measured in hours)

    The very fastest compute I could get was an Intel 5670DX chip. (I might have that wrong, but it's close) Found someone willing to sell me an overclocked machine running just shy of 5 GHz. That's it. Took the work from weeks to days, so very worth it, but so much more could be useful. And that's on some hot-rod motherboard, specific (and I mean very specific as far as matched RAM modules and such) configuration, tweaks, water cooled, the works!

    Back in the '90s, through the mid '00s, people would pay for a big boost, and pay big. Five figures was not uncommon.

    Today, the biggest jump I could get was about $3,500, and it's a sweet machine too, but more is needed. The people involved would have paid a low five figures for more, but it's unobtainium.

    IMHO, other people looking to move hardware will seek to solve these kinds of problems, once the groundwork is there to facilitate doing it. A GPU isn't much help. An FPGA could be, but integrating it is a PITA.

    A CAD CPU, just by way of one example, might just be worth way more than people think, if it can, in fact, take days back to hours, or maybe less! Once the idea of taking something lean and mean, effective at general compute takes root, it being extensible in very specific ways will follow.

    Anyway, my take on all this. Gonna be some interesting times ahead!

  • Cluso99 Posts: 18,066
    Humbug I say!

    Royalties on ARM cannot be all that big, at least for the smaller ARM chips. Otherwise, how can they sell for <$1 upwards?

    When you buy an ARM license, don't you get the silicon set (i.e. it's not just the ISA architecture, but the silicon reference design too)? There are obviously various silicon designs. And there are various designs for various chip fabricators and also feature sizes.

    Now, Apple and a couple of others have a full ARM license that gives them the whole enchilada - they can build what they like using any parts of the ARM design.

    So you get a free ISA spec, and an open simple verilog implementation. Where's all the software to support the RISC-V? There are tons of man-years of software development for the ARM (and Intel for that matter). RISC-V will take many years to catch up, if it can.

    Just because Nvidia and Western Digital have said they will use RISC-V doesn't mean they will. Perfect example... Intel with ARM license.
  • There are tons of man-years of software development for the ARM (and Intel for that matter). RISC-V will take many years to catch up, if it can.

    A ton of stuff is a recompile away.

    Again, Apple has transitioned multiple times. Knows how to do it. Soon, they will move it all to ARM. They've had their stuff running on a ton of ISAs in the past.

    Others, given some incentive, and performance could be one such incentive, will do the work. It's the end user, needing specifics, that will fund all of that.
  • jmg Posts: 15,140
    Cluso99 wrote: »
    Just because Nvidia and Western Digital have said they will use RISC-V doesn't mean they will..

    Oh, I expect they will certainly use it, even if just to help talk-down the ARM royalties.
    Helps if they can say to the ARM salesman - "Yeah, we ran RISC-V in product XYZ as a test, worked very well ..."

    There is a market opening, right around now, for a RISC-V with an optimized HyperBus or OctaSPI interface, for XIP design.

    If I were designing an MPU these days, I'd include a SKIP opcode for exactly that type of memory.
  • jmg Posts: 15,140
    potatohead wrote: »
    A CAD CPU, just by way of one example, might just be worth way more than people think, if it can, in fact, take days back to hours, or maybe less!

    NVIDIA may be there already ?

    https://www10.edacafe.com/nbc/articles/1/1575459/NVIDIA-Reinvents-Workstation-with-Real-Time-Ray-Tracing
  • Cluso99 Posts: 18,066
    Just read the SiFive article.

    Reading between the lines, it seems to me that SiFive are trying to gear themselves up to be another ARM... License their proprietary RISC-V IP designs. How open is that?? IMHO no more than ARM designs, perhaps even less.
  • jmg Posts: 15,140
    Cluso99 wrote: »
    Just read the SiFive article.

    Reading between the lines, it seems to me that SiFive are trying to gear themselves up to be another ARM... License their proprietary RISC-V IP designs. How open is that?? IMHO no more than ARM designs, perhaps even less.

    No real surprise, as 'silicon proven' is an important element, and someone has to make the shuttles and run the MHz tests, and that's really what you are paying for.
  • Cluso99 Posts: 18,066
    jmg wrote: »
    Cluso99 wrote: »
    Just read the SiFive article.

    Reading between the lines, it seems to me that SiFive are trying to gear themselves up to be another ARM... License their proprietary RISC-V IP designs. How open is that?? IMHO no more than ARM designs, perhaps even less.

    No real surprise, as 'silicon proven' is an important element, and someone has to make the shuttles and run the MHz tests, and that's really what you are paying for.
    Which then removes any advantages over ARM!!!
  • potatohead Posts: 10,253
    edited 2018-04-03 16:05
    jmg wrote: »
    potatohead wrote: »
    A CAD CPU, just by way of one example, might just be worth way more than people think, if it can, in fact, take days back to hours, or maybe less!

    NVIDIA may be there already ?

    https://www10.edacafe.com/nbc/articles/1/1575459/NVIDIA-Reinvents-Workstation-with-Real-Time-Ray-Tracing

    That's a very noteworthy advance, but it's only dealing with the pretty pictures. The hard part is creating the models. Polygon, sub-division surfaces, and other non-NURBS representations can benefit from multi-core machines. Concurrency can be had, largely in the entertainment realms, and/or assembly, where individual items can be processed together, as they have few to no dependencies. (which depends on how the models were made and parameterized, obviously)

    A CAD-centric CPU would need to very significantly improve single thread performance. Applying custom silicon to the compute heavy aspects could do that. There isn't anything else out there, other than very labor intensive, time consuming, and high risk refactoring of incredibly complex software. (geometry kernels -- the likes of Parasolid, ACIS and friends.)

    And, for some sense of the demand possible, really good CAD, not mid-range, like say Solidworks (which is a perfectly capable system, BTW), runs from $20K to $100K per seat, potentially, depending on options. Like I wrote above, one can drop under $5K on a workstation, and get the top speeds possible today. The major players, in the major verticals (auto, energy, life sciences, aero, medical) wouldn't think twice about a machine that costs up to several times that much, if it could take days to hours.

    The Ghz jumps we saw earlier, coupled with equally, if not more massive, graphics jumps, were pricey, but totally worth it. Returns were less than a year in many cases. No brainer.

    It has been illuminating to participate on all this P2 stuff. Don't know about you guys, but I've learned a ton!

    Process physics and the limits that go along with that kind of put the damper on RISC type ideas as being primary gains in themselves. Those have kind of peaked, in terms of big jumps in "sequential compute", or single thread performance. Sure, speculative execution, big, fast caches, and the like continue to eke out gains, but they are generally modest. I think the chip I got two years ago, mentioned above, got beat this year, but not by a whole lot. It's not worth refreshing a machine over, but it's a nice little bonus when an additional one is needed.

    But, a RISC instruction set, lean, mean, efficient, with some room left in it for differentiation?

    Yeah. That's got some potential, in my view anyway.

    In P2, we've got a lot of little specialized circuits. Makes a big difference. At least we think so. And it all will, and we've got the task of taking all that beyond PASM, which, frankly, is sweet! SPIN with in-line PASM is gonna rock. C, with similar kinds of things, and both with cool library code, will bring those circuits out to where people can use them. The beauty is that the tasks we've nailed will stay nailed. The balance is being just software-driven enough to be very generally useful without having to predict end uses. I'll say it now: a very good overall balance got struck over the years. All of us contributed our thoughts, and the outcome is looking pretty great.

    But, that's not necessarily what I'm writing about here in the RISC-V context. Specializing a bit more, targeting some niches that have high costs and risks associated with them, means less overall gain where general purpose compute is concerned. Maybe even a bit of a net loss.

    I don't think people will care, or even notice. My Note 8 phone is actually quite a bit faster than many laptops I see on general purpose computing. It runs on a tiny energy budget, has 5GB RAM, 8 cores, etc...

    Given a workstation class energy budget, custom silicon, coupled to an efficient ISA, has the potential to really deliver on things nobody can actually deliver on otherwise.

    Heck, with Apple now fairly overtly flirting with moving to their own CPUs, ARM based, Microsoft flirting with putting Windows into a lower tier, not a primary business driver, and this RISC-V effort coming on line...

    It may be that we circle back to the early days again. Specialized machines, only this time networked with a lot of open code and open data we really didn't have the first go around. Well, we did, but the state of software and computing in general was early.

    Seems to me there is a growing and considerable incentive to differentiate and specialize. Money to be made, and that amount is growing as general purpose compute just creeps along, in terms of performance gains available to us.

    Parallel / concurrent solutions that can make effective use of multi-core processors continue to be difficult to actualize. Some of the reasons, for some cases, I've written to here. But, generally speaking, it's just harder, and not broadly applicable.

    Unless there are some real breakthroughs in software, or process limits of some kind, going custom to maximize what the basic semi-conductor technology can do seems a no-brainer. The question in my mind isn't whether it will be done.

    Pretty sure it will.

    It's how, and on what terms.

    On the more efficient, capable, largely consumer device / IoT axis, those will be sold to people. We will see those devices pretty easily.

    On the higher end I've written to, maybe not. Subscription licensing for the major apps, CAD, eCAD, simulation, etc... seems to be the trend, right down to per use tokens. That all may end up in a data center one just gets access to, with lean and mean, general compute capable, but fast for interaction, more like mobile phone than PC type machines.



  • Heater. Posts: 21,230
    Cluso99,
    Humbug I say!

    Royalties on ARM cannot be all that big, at least for the smaller ARM chips. Otherwise, how can they sell for <$1 upwards?
    That is true. All well and good.

    Except... If we want to put a processor core into a corner of an FPGA to manage the logic of our latest gizmo, we cannot use the ARM instruction set. Not without lengthy negotiation and a royalty deal. Intel is right out. What to do? We could invent our own ISA, that's easy enough, but then we have to create all the compilers and tools for it as well. I know, let's use that newfangled RISC V. It's open for us to use and all the tools are available.

    Step that up a notch and we could be some start up making an actual chip. RISC V makes that hassle free.
    So you get a free ISA spec, and an open simple verilog implementation.
    Yes. RISC V is just the ISA spec. But there are already far from simple open implementations available. The Berkeley Out-of-Order Machine (BOOM) is a very serious design, pipelined, speculative execution, out of order execution, branch prediction and the rest. It has been taped out many times already and has performance up there with ARM 7.
    Where's all the software to support the RISC-V?
    C/C++ compilers are done, GCC and LLVM. Other languages are done or coming along, Pascal, Go, Rust, Java, etc. Linux, done. Various real-time operating systems, done. It's growing every day.
    Just because Nvidia and Western Digital have said they will use RISC-V doesn't mean they will. Perfect example... Intel with ARM license.
    Nah, what?

    Nvidia has had their own general purpose CPU design in there with their GPUs for years. Now they have decided it needs to move to 64 bits and gain some extra bells and whistles, so they opted to use the RISC V ISA rather than extend what they have or invent a new one. They, if anyone, are not afraid of designing a core of their own to fit the ISA spec.

    Western Digital has been sponsoring the RISC V Foundation, why would they not be serious about it?

  • Heater. Posts: 21,230
    Cluso99,
    Just read the SiFive article.

    Reading between the lines, it seems to me that SiFive are trying to gear themselves up to be another ARM... License their proprietary RISC-V IP designs. How open is that?? IMHO no more than ARM designs, perhaps even less.
    Let's read the actual lines, rather than imagining what is in between.

    The only thing definitely open here is the RISC V instruction set specification. Actual implementations of that can be as open or closed as their creator likes.

    SiFive for sure is not setting itself up to be another ARM. That ship has sailed. Others have tried and failed, MIPS, SPARC..., it would be silly.

    They are offering to help get your chips, using the RISC V spec, into production. No matter if it's using their own core IP, or some other open design you have selected, or your own secret stuff.

    As for "How open is that?? IMHO no more that ARM designs, perhaps even less.", I don't get what you are saying. That is not even possible. Don't you think it is kind of sick that I cannot design my own processor into my product that happens to use the same instruction set as an ARM?


  • Cluso99 Posts: 18,066
    edited 2018-04-04 04:39
    Heater. wrote: »
    ...
    As for "How open is that?? IMHO no more that ARM designs, perhaps even less.", I don't get what you are saying. That is not even possible. Don't you think it is kind of sick that I cannot design my own processor into my product that happens to use the same instruction set as an ARM?
    Absolutely no argument here!

    The original micros did not have patents on their ISA AFAIK. So at least those instructions should be free to use. And they are the base ones.

    Zilog implemented equivalent 8080 instructions without the wrath of Intel.

    Intel later tried to block AMD using the x86 ISA, and then there was that license agreement.

    IMHO it's no different to the API argument going on between Oracle and Google.

    The whole copyright/patent stew as applies to computers and software is just there to make the lawyers rich! And who gets to preside over the cases? Ex-lawyers !!!

    So we agree on many things, Heater ;)
  • Heater. Posts: 21,230
    edited 2018-04-04 08:25
    I guess it can happen that somebody dreams up a conceptually new CPU instruction that is worthy of a patent. Say it was one of Intel's memory protection scheme things or some such. The RISC V guys have been taking care that everything they specify is patent unencumbered, they trace the prior art and such going back decades.

    Then there is the copyright problem. I don't begin to understand how an instruction format gets copyright protection. But then the x86 has thousands of instructions which together make a substantial body of work worthy of copyright. RISC V does not copy from anyone so it should be OK there.

    I was thinking, by way of an example, what if Chip had a brainstorm and decided that the Propeller 3 should have a general purpose CPU bolted on? Perhaps something man enough to run Linux. Tightly coupled to the COGs and HUB RAM of course. Along with Chip's own magic sauce. Unlikely I know, but bear with me.

    What would Chip do? Design his own instruction set and create another CPU? Unlikely, that last one has taken 10 years! And then there would be no compilers, Linux or software tools for it. License an ARM core? That is going to take months of negotiation and be expensive. Besides, I don't think it fits with Chip's philosophy of independence. And integrating it with the HUB/COGs may not be so easy. Use some existing open core, OpenRISC for example? Nah, not enough support, and mind share is going to RISC V.

    No, the answer is to use a RISC V core. The Rocket, or BOOM, or other. Get SiFive or such to help. Job done.

    I think there are probably many little startups and such going through that train of thought today. Which is why RISC V is getting so much attention.

    The kicker would be Apple.

    Apple are masters of changing architectures. 6502 to 68000 to PowerPC to ARM and x86. They could switch to RISC V.

    ARM only exists because of Apple. Quite likely if Apple had not selected ARM for their Newton project then ARM would have gone down the tubes. If I understand correctly that decision inspired Nokia to select ARM for their phones.

    Could it happen that Apple decides to cut out the middleman, Softbank, and move to RISC V? They have the expertise and the money to do so. They already customize their ARM SoCs heavily, a swap of instruction set might not be such a big step for them.



  • Cluso99 Posts: 18,066
    edited 2018-04-04 08:47
    heater,
    Did you see Intel shares dropped 9% IIRC on Apple rumours of ditching Intel for its own ARM cores in Macs? They say Apple accounts for 5% of Intel sales - unbelievable.

    I said a couple of years back that I felt sure Apple were working on this (no inside info). Intel caused Apple grief a couple of years back when their laptop chips were overly late. Bet the project started then (or before).
  • Heater. Posts: 21,230
    That news floated past me. It did not mention that it was a switch to ARM in the headline. I assumed it was an April Fools' Day joke and skipped over it!

    Apple has a very strong incentive to own its whole tech stack.

    They cater to a specific set of people, and those people pay well for the end product.

    If Apple does move entirely onto ARM, it will be very likely to be so they can offer privacy and really mean it, and do so at very favorable performance / energy ratios.

    Intel limits them there, and I'll bet their ARM team can do better, while delivering solid performance. Not peak, as doing that is currently a hot mess. Intel's game for now.

    I will also bet Apple goes a little of the way SGI did, and will augment their CPU with some hardware. The important tasks will get the assist they need to be superior experiences.

  • Ale Posts: 2,363
    If Apple does move entirely onto ARM, it will be very likely to be so they can offer privacy and really mean it, and do so at very favorable performance / energy ratios.

    Their SoCs are already faster than anything found on the other phones/tablets... with fewer cores! I always wondered why; if you look at the (paucity of) comparisons and descriptions, it is clear that they have the wider issue unit, longer pipeline and most execution ports of all implementations, and that gives them a noticeable edge. No way to deny it.
  • Truth. :D
  • Heater. Posts: 21,230
    That makes me wonder...

    If Apple has a license for the ARM instruction set, but then they invest a huge amount of money into building their own implementation, with all that "wider issue unit, longer pipeline and most execution ports" etc, then why should they bother with the ARM ISA?

    One could do all that with the RISC V ISA and cut ARM out of the loop.

    Not only that, Apple is heavily invested in LLVM, the compiler tool chain for C/C++ and other languages, which of course supports RISC V now.

    Looks to me like they are all good to go for CPU vendor independence.
  • potatohead Posts: 10,253
    edited 2018-04-04 23:18
    That all depends on the cost of the license balanced against the cost of ISA change, inclusive.

    Near term, say a decade? I'll bet that equation does not favor a move off ARM.

    Longer term? Bet it eventually does.

    If there is any merit to that, Apple will make an internal port and begin the same migration process they have done a few times now.



  • One advantage RISC-V has over ARM, that nobody has mentioned yet, is that the ISA is both extensible and optimized.

    (1) RISC-V explicitly saves a big part of its instruction set for various extensions, including allowing for user extensions. This makes it very attractive for use in FPGAs and custom silicon, since it means the user can define custom instructions for whatever special hardware operations they need. (It would be cool to see a RISC-V with Propeller instructions like "waitcnt", "waitpeq", "lockclr", etc. added on; I think the Prop-1 instruction set, at least, could fit in a RISC-V extension). A rough encoding sketch follows at the end of this comment.

    (2) The ARM instruction set has grown over the years, and has some compatibility cruft (like having both Thumb and Thumb2) and design decisions (like having PC appear as a regular register) that are suboptimal for today's processors. RISC-V is a clean sheet design, and has learned from all the previous RISC instruction sets. In general a RISC-V processor can run at a higher clock rate than a similar ARM (instruction decoding is easier) and can achieve better performance, all other things being equal.
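
    As a concrete illustration of point (1): RISC-V leaves whole major opcodes unallocated for vendor use (custom-0 is 0b0001011), so a custom instruction can reuse the standard field layout and still coexist with the stock toolchain. Below is a minimal C sketch that packs a hypothetical "waitcnt"-style R-type instruction into that space; the mnemonic and the funct3/funct7 values are made up for illustration, and only the opcode reservation and the R-type field positions are standard RISC-V.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Pack an R-type instruction word: funct7 | rs2 | rs1 | funct3 | rd | opcode. */
    static uint32_t encode_r_type(uint32_t opcode, uint32_t rd, uint32_t funct3,
                                  uint32_t rs1, uint32_t rs2, uint32_t funct7)
    {
        return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) |
               (funct3 << 12) | (rd << 7) | opcode;
    }

    int main(void)
    {
        /* Hypothetical "waitcnt rd, rs1, rs2" placed in the custom-0 opcode
         * space (0x0B): wait until a cycle counter reaches rs1 + rs2, then
         * write the reached value to rd. a0 = x10, a1 = x11, a2 = x12. */
        uint32_t insn = encode_r_type(0x0B, 10, 0x0, 11, 12, 0x00);
        printf("waitcnt a0, a1, a2 -> 0x%08" PRIX32 "\n", insn);
        return 0;
    }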
  • Heater. Posts: 21,230
    That's right. There is a lot of scope for extending the RISC V instruction set whilst at the same time having the support of the existing compilers and tools for the standard base ISA and extensions.

    I had already been pondering the idea of adding instructions to do a Prop like INA, OUT, DIR, thus getting that tight coupling between the processor and the I/O like the Prop. Also things like WAITCNT, WAITPIN. Not the same instructions exactly but the same effect.

    I was not so bold as to imagine putting the entire Prop instruction set in there as an extension! How would that look, Eric? The architectures are so different. RISC V does not execute from its registers, for a start.

    Would you add 512 registers to the RISC V and a "cogstart" instruction to load them up with code and get them running? That would be spectacular!

    As it happens I spent last weekend getting to grips with the RISC V instruction set and encoding. On the face of it nothing looks very special there. Could be an OpenRISC or MIPS or whatever. But when you look closely you find all kinds of little details have been arranged "just so", so as to make implementation simpler, easier, faster and keep compilers happy. And be extensible.

    Anyway, my close look at the instruction encoding resulted in me accidentally writing 800 lines of SpinalHDL over the Easter holiday to implement a RISC V. So far it looks like this: https://github.com/ZiCog/sodor-spinal

    What started to drive me nuts, apart from not knowing anything of SpinalHDL or processor architecture, was the horrendously complicated looking way the RISC V immediate values are encoded in various formats. There is a method in its madness though.
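
    For anyone else staring at those immediate formats, here is a minimal C sketch (my own illustration, separate from the sodor-spinal code above) of how an emulator or disassembler can reassemble the scattered bits. The method in the madness is that the sign bit is always inst[31] and most bit groups keep the same position across formats, which keeps the hardware muxing cheap even though the software view looks shuffled.

    #include <stdint.h>

    /* Reassemble the sign-extended immediate from a 32-bit RISC-V
     * instruction word, one helper per format. */

    static int32_t imm_i(uint32_t inst)   /* ALU-immediate, loads, JALR */
    {
        return (int32_t)inst >> 20;                    /* imm[11:0]  <- inst[31:20] */
    }

    static int32_t imm_s(uint32_t inst)   /* stores */
    {
        uint32_t imm = ((inst >> 25) & 0x7F) << 5      /* imm[11:5]  <- inst[31:25] */
                     | ((inst >> 7)  & 0x1F);          /* imm[4:0]   <- inst[11:7]  */
        return (int32_t)(imm << 20) >> 20;             /* sign-extend from bit 11   */
    }

    static int32_t imm_b(uint32_t inst)   /* conditional branches */
    {
        uint32_t imm = ((inst >> 31) & 0x1)  << 12     /* imm[12]    <- inst[31]    */
                     | ((inst >> 7)  & 0x1)  << 11     /* imm[11]    <- inst[7]     */
                     | ((inst >> 25) & 0x3F) << 5      /* imm[10:5]  <- inst[30:25] */
                     | ((inst >> 8)  & 0xF)  << 1;     /* imm[4:1]   <- inst[11:8]  */
        return (int32_t)(imm << 19) >> 19;             /* sign-extend from bit 12   */
    }

    static int32_t imm_j(uint32_t inst)   /* JAL */
    {
        uint32_t imm = ((inst >> 31) & 0x1)   << 20    /* imm[20]    <- inst[31]    */
                     | ((inst >> 12) & 0xFF)  << 12    /* imm[19:12] <- inst[19:12] */
                     | ((inst >> 20) & 0x1)   << 11    /* imm[11]    <- inst[20]    */
                     | ((inst >> 21) & 0x3FF) << 1;    /* imm[10:1]  <- inst[30:21] */
        return (int32_t)(imm << 11) >> 11;             /* sign-extend from bit 20   */
    }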
  • jmg Posts: 15,140
    Heater. wrote: »
    I was not so bold as to imagine putting the entire Prop instruction set in there as an extension! How would that look, Eric? The architectures are so different. RISC V does not execute from its registers, for a start.

    Would you add 512 registers to the RISC V and a "cogstart" instruction to load them up with code and get them running? That would be spectacular!
    Interesting idea.
    What is the relative size of RISC-V and Prop LUT counts?

    Bumping registers to 512 would likely also slow them down significantly, but if the RISC-V core is small enough, you can have more than one of them.
    Then, adding opcodes to make it Prop-like real time, and giving it small-but-fast local code memory could end up with something that got the best of both worlds.

  • Heater. Posts: 21,230
    The picoriscv core I have been playing with takes 1700 logic elements on a DE0-nano, 8%.
    I have no idea how big a P1 COG is.
  • jmg Posts: 15,140
    On the topic of RISC-V, just seen this....

    https://riscv.org/membership/1896/c-sky/

    "Welcome to the RISC-V Foundation Members Directory. C-SKY is an industry leading IC design house dedicated to 32-bit high-performance and low-power embedded CPU with the licensing of the chip architecture as its core business."

    and
    http://en.aptchip.com/Article/Equipment.aspx?cid=67&nid=25

    Q: does that mean the APT32F003F6P6 TSSOP20 32k Flash part has a RISC-V core?
  • Heater. Posts: 21,230
    Hard to tell. My guess is not, it seems early days yet. I suspect that is an existing design of theirs and they are checking out the lie of the land in the RISC V world.
  • Heater. wrote: »
    I was not so bold as to imagine putting the entire Prop instruction set in there as an extension! How would that look, Eric? The architectures are so different. RISC V does not execute from its registers, for a start.

    Would you add 512 registers to the RISC V and a "cogstart" instruction to load them up with code and get them running? That would be spectacular!

    Sorry, I was very imprecise. There isn't room to fit all of the Prop1 in! By "instruction set" I just meant that the Prop1 has I think 64 unique instructions (fewer, actually; a few weren't implemented) and there's certainly room to add that many instructions to RISC-V. I wouldn't try to duplicate the whole architecture. Rather I think it would make more sense to add enough to RISC-V so that it could do everything the Prop could do, but in a RISC-V way (running out of instruction cache rather than registers, for instance).

    The special registers like INA, OUTA could be implemented as RISC-V CSRs, and there are already instructions for manipulating those. We'd need to add wait instructions. MUX might be a bit tricky; there's no carry bit, so MUXC/MUXNC would be right out, but something like MUXZ/MUXNZ could be implemented using the extra source operand on RISC-V. I wouldn't bother trying to fit the branches in (RISC-V already has a complete set of those). cogstart would have to be a bit different, but it could just set up a PC for the new "RISC cog" and start it running out of cache. Things like that.
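
    To make the CSR idea concrete, here is a hedged C sketch of what INA/OUTA access might look like from software on such a core. The CSR numbers 0x7C0..0x7C2 are hypothetical (that range is conventionally set aside for custom machine-mode CSRs); only the csrr/csrs/csrc/csrw instructions themselves are standard RISC-V, and this assumes a RISC-V GCC or Clang target.

    #include <stdint.h>

    /* Hypothetical CSR numbers for Prop-style pin registers; the addresses
     * are made up for illustration, chosen from the range customarily
     * reserved for custom machine-mode CSRs. */
    #define CSR_INA  "0x7C0"   /* input pin states    */
    #define CSR_OUTA "0x7C1"   /* output pin states   */
    #define CSR_DIRA "0x7C2"   /* pin direction bits  */

    static inline uint32_t ina_read(void)
    {
        uint32_t v;
        __asm__ volatile ("csrr %0, " CSR_INA : "=r"(v));         /* read INA          */
        return v;
    }

    static inline void outa_set_bits(uint32_t mask)
    {
        __asm__ volatile ("csrs " CSR_OUTA ", %0" :: "r"(mask));  /* OUTA |= mask      */
    }

    static inline void outa_clear_bits(uint32_t mask)
    {
        __asm__ volatile ("csrc " CSR_OUTA ", %0" :: "r"(mask));  /* OUTA &= ~mask     */
    }

    static inline void dira_write(uint32_t value)
    {
        __asm__ volatile ("csrw " CSR_DIRA ", %0" :: "r"(value)); /* set pin directions */
    }

    The atomic set/clear semantics of csrs/csrc map quite naturally onto OR'ing and AND'ing OUTA; the wait instructions would still need opcodes of their own, as noted above.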

    I guess what I'm saying is that I think one could make a credible Prop-like machine based on the RISC-V ISA, with some appropriate extensions. Maybe Parallax should consider doing that for Prop3. It would save them a lot of work on tools, and might open up markets in universities that are interested in RISC-V.
    Anyway, my close look at the instruction encoding resulted in me accidentally writing 800 lines of SpinalHDL over the Easter holiday to implement a RISC V. So far it looks like this: https://github.com/ZiCog/sodor-spinal

    What started to drive me nuts, apart from not knowing anything of SpinalHDL or processor architecture, was the horrendously complicated looking way the RISC V immediate values are encoded in various formats. There is a method in its madness though.

    Wow, sounds impressive! I hope I can get a chance to take a look at that soon.

    I agree with you about the immediate encodings -- they look horrible at first, but actually make a weird kind of sense when you implement them. I expect they're especially convenient in hardware, but even in software they turned out not to be as bad as I thought they would be in the Propeller RISC-V emulator.

    Regards,
    Eric
  • Heater. wrote: »
    The picoriscv core I have been playing with takes 1700 logic elements on a DE0-nano, 8%.
    I have no idea how big a P1 COG is.

    A P1V cog is similar. P1V without counters and vid would be smaller, or with them would be a little larger.

    There are 4 unused instruction slots in the P1V



