If it became way cheaper to make chips, I'm sure better tool methodologies would be adopted, as more risk could be tolerated.
Looks like Chip is spot on there.
Today you can design your own SoC. Use a RISC V core. Surround it with your own custom peripherals, accelerators, whatever your specific application needs. Get it all working on an FPGA, which is very cheap now.
Then ship the Verilog off to SiFive and they will get you a hundred test chips from TSMC for $100,000. https://www.sifive.com/
This all goes hand in hand with Berkeley using their Chisel language and Dolu creating SpinalHDL. With such friendly systems one can tap into all the thousands of software engineers out there, instead of all those expensive Verilog gurus demanding huge salaries. Or at least your turnaround time on designs can be a lot quicker.
Agile development of hardware!
I should be able to post my SpinalHDL design to somebody like SiFive over a web interface and get my chips back pretty quick. Like ordering a PCB from Oshpark: https://oshpark.com/
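To make that concrete, the top level need be nothing scarier than something like this in plain Verilog (cpu_core and my_accel are made-up placeholder names, not anyone's actual IP, just a sketch of the shape of the thing):

// Hypothetical SoC top: a RISC-V core plus one custom peripheral on a
// trivial memory-mapped bus. All module and signal names are made up.
module soc_top (
    input  wire clk,
    input  wire rst_n
);
    wire [31:0] addr, wdata;
    wire        we;
    wire [31:0] accel_rdata;
    reg  [31:0] rdata;

    // Drop in your RISC-V core of choice (picorv32, VexRiscv, ...).
    cpu_core cpu (
        .clk(clk), .rst_n(rst_n),
        .addr(addr), .wdata(wdata), .we(we), .rdata(rdata)
    );

    // The "secret sauce" accelerator, decoded at 0x4000_0000.
    wire accel_sel = (addr[31:28] == 4'h4);
    my_accel accel (
        .clk(clk), .sel(accel_sel), .we(we),
        .wdata(wdata), .rdata(accel_rdata)
    );

    // Trivial read mux.
    always @(*)
        rdata = accel_sel ? accel_rdata : 32'h0;
endmodule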
Where do they state this? Also note that this wouldn't get you into production. The mask set would likely cost a few million dollars.
It's such a conundrum that billions of transistors are cheaper than ever, but the cost of that first transistor has shot to the moon.
I missed this the first time through. There are Amazon F1 instances as well with Xilinx Ultrascale parts. If you do a little searching you can find Jan Gray talking about putting smiley loads of RISC Vs into one. (Where a smiley load means something like 1680?)
On the other end of the scale there are ASICs. Getting a batch of ASICs made seems to now be down to 150K USD at 130nm or 500K at 65nm.
http://blog.zorinaq.com/asic-development-costs-are-lower-than-you-think/
Hmm... this is fun, I just found this ASIC cost calculator by Signetics: http://www.sigenics.com/page/Asic-Cost-Calculator
Seems my imaginary multi-core RISC V SoC together with:
* My home made peripherals and secret sauce (TBD)
* A megabyte of RAM
* 100 pins
* Plastic package
* Consumer temp range
* 3 by 3 mm die
Will cost me 143,684 USD NRE for 1000 chips and a production die cost of 2.90 USD.
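Sanity-checking that with simple arithmetic: per-chip cost is roughly NRE/volume + die cost. At 1000 chips that is 143,684/1000 + 2.90, about 146.58 USD a chip, almost all of it NRE. At 100,000 chips it falls to roughly 1.44 + 2.90, about 4.34 USD (ignoring packaging and test), which is why those NRE numbers only sting at low volume.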
@KeithE,
Where do they state this? Also note that this wouldn't get you into production.
They don't seem to talk about that on the web site. The 100K estimate has been stated by one of the SiFive team, more than once, during various presentations at RISC V conferences. I have been putting myself to sleep at night by listening to a lot of those. Sadly I can't find the relevant ones to refer you to.
Not sure of the vendor's name. They have FPGA like chips with custom masks for the program instead of RAM. They take your working Verilog/VHDL code and turn it into masks. Relatively cheap, but it has the downfall of being a masked-ROM style FPGA using more power than a specialised ASIC, plus the cost per chip will always be higher than a custom chip because you are paying for the FPGA IP.
Fits in between FPGA and ASIC for lowish volumes.
Seems to me there should be a way for the little guys to pool their resources and get their custom logic into an ASIC much cheaper.
Let's say it's possible to get that RISC-V core, 1MB RAM and some essential/common peripherals, UART, USB, SPI, I2C, GPIO into an ASIC for 150K USD.
Let's also say that having done that there is still a big blob of gates free. Enough for 10 little blobs of custom logic. Heck, I can get a RISC V into a DE0 Nano with 90% free space.
So then, all we need is for 10 or so people to come forward and add their custom logic to the design and cough up 15K USD.
At that point we pull the trigger and order the chip. Boom, everyone in the group gets their custom chip for 15K. Excellent.
Of course in the finished chip not all of those blobs of custom logic can be in use at the same time. So, just arrange that by pulling some "blob select" pins high and low some gates enable or disable the blobs. Say only one blob gets enabled when installed in a board.
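A rough sketch of that blob-select idea in Verilog, assuming for illustration four blobs and a 2-bit select strapped on the board (the blobN modules are made-up placeholders for the contributors' designs):

// Four contributors' blobs; the board straps blob_sel to pick one.
module blob_mux (
    input  wire        clk,
    input  wire [1:0]  blob_sel,   // strap pins, tied high/low per board
    input  wire [31:0] d,
    output reg  [31:0] q
);
    wire [31:0] q0, q1, q2, q3;

    // Each blob gets an enable so the unselected ones can sit idle.
    blob0 b0 (.clk(clk), .en(blob_sel == 2'd0), .d(d), .q(q0));
    blob1 b1 (.clk(clk), .en(blob_sel == 2'd1), .d(d), .q(q1));
    blob2 b2 (.clk(clk), .en(blob_sel == 2'd2), .d(d), .q(q2));
    blob3 b3 (.clk(clk), .en(blob_sel == 2'd3), .d(d), .q(q3));

    // Only the selected blob's output is ever seen.
    always @(posedge clk)
        case (blob_sel)
            2'd0: q <= q0;
            2'd1: q <= q1;
            2'd2: q <= q2;
            2'd3: q <= q3;
        endcase
endmodule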
(Isn't this how Oshpark operates with its PCB service, gathering up lots of little board designs and getting them made together?)
And of course this is only workable when everyone's blobs are open source. I can't see any way of doing it with secret netlists or whatever.
Could be as simple as having contributors to the pool push the Verilog/VHDL/Spinal of their blob designs to a github project.
I was talking to a chip designer in Mountain View the other week about getting chips made. He suggested that the thing to do was get friendly with some folks in a university who do exactly this kind of joint run for their students' projects. He reckoned that with the right friends one could sneak one's design into a run and get a chip made for free!
Heater - I know about the possibility of getting shuttle chips and sharing mask costs. But you said deliver Verilog, which implies a lot of EDA work that they would have to do. Chip has experience with the going rates there. It doesn't make sense to me that they would quote such a low cost. Not only are the tools and personnel expensive, it's an opportunity cost to use these resources on a chip that's not going to make money.
Not sure of the vendor's name. They have FPGA like chips with custom masks for the program instead of RAM.
I think you're thinking of BaySand MCSC, Cluso.
https://www.baysand.com/technology
It's a mystery to me. Unknown territory.
But that is what Signetics and SiFive are saying. How it works out in practice is another matter.
In my, admittedly naive, world view, if my Verilog works under simulation (Icarus and Verilator) and it works on one or more FPGA architectures then they should not have much work to do to get it to work in an ASIC. Especially if there is nothing funky in there like PLLs, fuses or analog circuitry, etc. Surely they have their process sorted out by now?
I might find out soon. I checked the box for a Signetics guy to contact me. The boss is always pushing me to explore hardware options. Well, the company obviously needs an ASIC, right?
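(By "works under simulation" I mean something like this sort of self-checking bench, which should run as-is under Icarus, with a toy counter standing in for the real design:)

// Toy self-checking bench: an 8-bit counter as the stand-in DUT.
module counter (input wire clk, input wire rst, output reg [7:0] count);
    always @(posedge clk)
        if (rst) count <= 8'd0;
        else     count <= count + 8'd1;
endmodule

module tb;
    reg clk = 0, rst = 1;
    wire [7:0] count;

    counter dut (.clk(clk), .rst(rst), .count(count));

    always #5 clk = ~clk;   // 100MHz if a 1ns timescale is assumed

    initial begin
        #12 rst = 0;        // release reset between clock edges
        #100;               // ten rising edges go by
        if (count !== 8'd10) $display("FAIL: count = %0d", count);
        else                 $display("PASS");
        $finish;
    end
endmodule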
Not sure I could afford to register as a student nowadays. I'll soon be eligible for a senior citizen discount though.
* Compilation of memories (RAM and ROM)
* Design of pad ring (any custom I/Os? Or characterization of existing ones under different conditions?)
* As you said - custom IPs?
* Potential for power islanding - design and verification that it was done correctly. Insertion and verification of isolation logic
* Design of package
* Insertion of test logic and generation of test patterns (see the sketch after this list)
* Design of test fixture and production test vectors
* Static timing analysis and working with the customer to close timing. (Speeding up logic and identifying false or multicycle paths)
* Linting of RTL and fixing issues related to testability etcetera
* Formal verification (and potential feedback to customer on the impact of coding style on the runtime)
* Running of gate sims (Icarus won't cut it.)
* Whatever I'm forgetting at the moment...
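To give a flavour of the test logic item: the tools typically swap every flip-flop for a scan version, so the whole chip's state can be shifted in and out on the tester. Conceptually it's this (a hand-written sketch; real insertion is automated at the gate level):

// Scan flip-flop: in test mode (scan_en high) it captures scan_in
// instead of d, so all flops chain into one long shift register.
module scan_ff (
    input  wire clk,
    input  wire scan_en,
    input  wire scan_in,  // from the previous flop in the chain
    input  wire d,        // normal functional input
    output reg  q         // feeds the logic and the next flop's scan_in
);
    always @(posedge clk)
        q <= scan_en ? scan_in : d;
endmodule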
Well, yeah, but...
* Compilation of memories (RAM and ROM)
We have been putting RAM and ROM in chips for decades. I would hope that driving RAM from my Verilog is a done deal by now.
* Design of pad ring (any custom I/Os? Or characterization of existing ones under different conditions?)
We have also had I/O pins on chips for decades. I'm assuming nothing special there. Should just work.
* As you said - custom IPs?
I just want a pile of Verilog put into a chip. I don't want to buy IP blocks. I don't want them to customize anything. I may have open source peripherals and such in that pile but that is just my pile of Verilog.
* Potential for power islanding...
Not sure what you mean there. I just want it to start working when power is applied and reset is released.
* Design of package
What design of package? Just put it in whatever standard package you have and bond pads to pins.
* Insertion of test logic and generation of test patterns
* Design of test fixture and production test vectors
* Static timing analysis and working with the customer to close timing. (Speeding up logic and identifying false or multicycle paths)
* Linting of RTL and fixing issues related to testability etcetera
* Formal verification (and potential feedback to customer on the impact of coding style on the runtime)
* Running of gate sims (Icarus won't cut it.)
Hmm...
This all sounds like having to go to excruciating lengths to verify that my C compiler has generated the right binary code from my source. And then having to verify that my processor executes those binary instructions correctly.
It has happened that compiler bugs have caused my code to fail. It has happened that CPU bugs have caused my code not to execute correctly. It's pretty damn rare though.
To my mind, if my Verilog design works, logically at least, under simulation and on a couple of different FPGA architectures then the problem is with the ASIC vendor's synthesis tools and/or process. I hope they don't want me to pay for that!
Except for timing.
Well heck. If my design runs at 100MHz on an FPGA, which it does, and that is what I want, I hope they can beat that by a huge margin in an ASIC.
Should not be a problem.
Are these vendors charging us for all kinds of stuff we don't need?!
Anyway, thanks, that is a good list of points to bring up when I talk to the guy from Signetics.
The C compiler analogy isn't quite right. Keep in mind that things are done after synthesis which can break the design.
Power islanding - powering off parts of the die to save power.
Standard package - e.g. take a close look at a BGA. The FR4 is most likely going to be unique for each chip.
Anyway, I won't address everything, but it sounds like you really want something like a gate array.
Your link said Sigenics, not Signetics. Signetics was the US semi company Philips bought.
You might not want to call his company Signetics.
One thing to ponder - if you get a custom RAM compiled that nobody has ever done before, then how is it checked and characterized for timing? Can you see how this could require some serious computer resources and might go wrong? Also many alternatives may be explored along the way. Do you care more about speed or power...
Err, yeah, you got me there. I did have a little trouble getting the Quartus tool to infer my Verilog RAM as RAM blocks. But hey, given that I had never written a line of Verilog before that and never used the Quartus tools, I think that was resolved pretty quickly.
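For anyone else who hits that, the canonical single-port RAM shape that the tools (Quartus included, as far as I know) will infer as block RAM is something like:

// Generic single-port synchronous RAM that FPGA tools infer as block RAM.
module spram #(
    parameter AW = 10,   // address bits: 1024 words
    parameter DW = 8     // data bits
) (
    input  wire          clk,
    input  wire          we,
    input  wire [AW-1:0] addr,
    input  wire [DW-1:0] wdata,
    output reg  [DW-1:0] rdata
);
    reg [DW-1:0] mem [0:(1<<AW)-1];

    // Synchronous write plus registered read is the pattern the
    // inference engines recognise; an asynchronous read usually isn't.
    always @(posedge clk) begin
        if (we)
            mem[addr] <= wdata;
        rdata <= mem[addr];
    end
endmodule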
The C compiler analogy should be right by now. We have been doing this stuff for decades.
What things are done after synthesis?
No I don't want a gate array. I'm investigating ASIC.
I don't want a custom RAM. I want whatever it is they have been using for years.
Don't get me wrong. I'm totally naive about all this and have no experience. I can well imagine lots of things can go wrong.
What you are pointing to is that the chance of failure goes up with complexity. Custom this or that, power islands, size, speed, etc.
I like to think that in the extreme, if my Verilog describes only a single NAND gate, the chance of it working is 100%. So, at what point does the chance of success drop to 99% or 50%? Assuming no extensive work by the chip vendor.
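(For the record, the 100% case:)

// The whole design: one NAND gate. Hard to get wrong.
module nand2 (input wire a, input wire b, output wire y);
    assign y = ~(a & b);
endmodule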
A couple of decades or more ago, one project I worked on required a custom radio modem. One engineer spent about a year designing that and building a prototype in TTL and other chips on a huge board. Then they gave the design to a young engineer who wrote the thing in VHDL and checked it out on FPGA. It took 3 FPGA boards to fit the design into at the time. His test setup was an impressive mess of boards and LSAs sprawling across the bench. Then, in what seemed like no time, I had modem chips to test my software against. Magic!
Surely things have gotten even better since then. No?
"Sigenics, not Signetics"
Oops. Maybe I already need that Senior Citizens Bus Pass!
I have never heard of Sigenics before.
After synthesis - the most likely disruption would be test insertion at the gate level. You need to make sure that the result is equivalent in functional mode.
I think you will want the ability to compile ROMs and RAMs - maybe there would be off the shelf ones that are close enough, but it's typical to build what you really need.
When we needed custom GPS radios we typically made test chips and connected them into FPGAs. If not then something would be emulated with off the shelf parts. Part of it is about software codevelopment.
Software/hardware co-development is a wonderful thing. That wireless modem took a year to arrive. As did the software. Good job we were doing both at the same time. That meant I could suggest all kinds of tweaks to the interface before they pulled the trigger on the ASIC. Some of which were just to make the software side easier. Some of which were required to make the finished product even meet its specification!
Along those lines - the book that firmware engineers should buy for the hardware engineers in their lives. (OK I haven't read it, but looked at some of it via his blog https://www.garystringham.com/category/best-practices/)
https://www.amazon.com/gp/product/1856176053/ref=ox_sc_sfl_title_4?ie=UTF8&psc=1&smid=ATVPDKIKX0DER
http://www.riscvbook.com/
The RISC-V Reader: An Open Architecture Atlas
Authored by David Patterson, Andrew Waterman
Edition: Beta
The RISC-V Reader is a concise introduction and reference for embedded systems programmers, students, and the curious to a modern, popular, open architecture. RISC-V spans from the cheapest 32-bit embedded microcontroller to the fastest 64-bit cloud computer. The text shows how RISC-V followed the good ideas of past architectures while avoiding their mistakes.
Highlights include:
* Introduces the RISC-V instruction set in only 100 pages, including 75 figures
* 2-page RISC-V Reference Card that summarizes all instructions
* 50-page Instruction Glossary that defines every instruction in detail
* 75 spotlights of good architecture design using margin icons
* 50 sidebars with interesting commentary and RISC-V history
* 25 quotes to pass along wisdom of noted scientists and engineers
Ten chapters introduce each component of the modular RISC-V instruction set--often contrasting code compiled from C to RISC-V versus the older ARM, Intel, and MIPS architectures--but readers can start programming after Chapter 2.
I'm not going to buy it. I think I have heard it all in the presentations about RISC V I have seen on YouTube for some years now.
I am tempted to go for Patterson's classic book "Computer Organization and Design"
Now in a new RISC-V Edition:
https://www.amazon.com/Computer-Organization-Design-RISC-V-Architecture/dp/0128122757/ref=sr_1_1?ie=UTF8&qid=1508102671&sr=8-1&keywords=David+Patterson+computer+risc+v
Is RISC-V going to be something like the Esperanto language?
Something that is a good idea, fixes the edge cases, and seemingly makes total sense in certain circles?
But what makes it compelling to actually use RISC-V?
Or a businessman would ask, what is the economic advantage of starting over (yet again) in creating a CPU? Is it really that much of a step up that it would reduce the price of ASICs or FPGAs?
Good questions.
I'm not sure anyone is claiming RISC-V is a 'step up' as the speed is more determined by the process used.
Where RISC-V will have appeal, is not so much the opcodes themselves, but the fact they are open.
There are already RISC-V FPGA versions, and I saw a company doing highly compact 8051 & 80386 cores, they claim in about 300 LEs.
To hit those very small core sizes there is a size/speed trade off, & they use a microcoded engine, so a (say) 100MHz clocked 8051 runs at ~10 MIPS, or roughly ten clocks per instruction.
(ie rather like the P2 byte-code engine)
If someone used the same approach on a RISC-V, of a more modest speed, but very compact core, that would break into territory ARM simply does not cover.
Is RISC-V going to be something like the Esperanto language?
I see what you are getting at. I'm not sure that analogy with human languages stands up. But I would say "no". There is nothing new in the RISC V instruction set that anyone familiar with such things would not understand.
But what makes it compelling to actually use RISC-V?
The instruction set of a computer is the way that our software talks to the hardware to get it to do what we want. It is probably the most important standard in computing, never mind all the other standards we have for USB, networking, etc. Perhaps it would be nice to have an instruction set architecture that is standardized. Perhaps it would be nice to have such a standard that is unfettered by copyright and other licensing restrictions.
Or a businessman would ask, what is the economic advantage of starting over (yet again) in creating a CPU?
RISC V is not a CPU.
RISC V is just a specification for a computer instruction set that is free for all to use. Actual processor designs built to the RISC V specification are available, both Free and Open Source and otherwise.
The economic advantage is clear. If you want a processor in your design you cannot use Intel x86, and if you want to use ARM, that is a lengthy licensing negotiation which costs you money at the end of the day.
You could use any number of other instruction set designs, but then you have the problem of building and maintaining the tool chains to support that.
Is it really that much of a step up that it would reduce the price of ASICs or FPGAs?
Never mind the step up. A RISC V can be built to whatever performance level you like.
But yes, I believe it can make ASICs cheaper. One no longer has to license an ARM or whatever.
In the FPGA world the vendors try to lock you in to their devices by offering CPU cores to drop into your designs. Altera has the Nios, Xilinx has the MicroBlaze. Perhaps it's better not to get locked into a vendor and keep your options open.
To add to what Heater said, check out this quote from Wikipedia: "In 2013, Arm stated that there are around 15 architectural licensees, but the full list is not yet public knowledge.[78][82]"
First of all that number is really low. Second of all isn't it interesting that it's a secret?
If I'm building a chip with an embedded processor then I really want access to the source code. I wouldn't necessarily need it, but it's great to have when debugging your system. Also you have some options available if you absolutely need to make a change.
Also I'm not sure how to read your question. For example it could be:
(1) if I'm making my own chip and need an embedded CPU that the end customer won't know about why go RISC V?
(2) If I'm making yet another embedded CPU chip and was going to use ARM then why pick RISC V?
(3) Or if I'm trying to make my own CPU core for such a CPU chip, why give up my own proprietary architecture and go with RISC V?
For #1 your customers won't care about the CPU architecture so it's probably a fairly straightforward internal decision process. For #2 and #3 it could be that you need to discuss with your customers. And for #3 there will probably be some internal flamewars ;-)
Here's someone having some fun - a 1 GHz picorv32 variant. Would be nice if they released their "debug spec 0.9 debugging" addition.
https://groups.google.com/a/groups.riscv.org/forum/m/#!topic/hw-dev/kppLfANB8Pw
... check out this quote from Wikipedia "In 2013, Arm stated that there are around 15 architectural licensees, but the full list is not yet public knowledge.[78][82]"
First of all that number is really low. Second of all isn't it interesting that it's a secret?
I think that 15 means the opcode variants, like M0, M0+, M3, M4, M4F etc, not the count of customers who 'Send ARM money'.
I guess if you are big enough, you get early access rights....
Some companies I think have/had the right to extend ARM themselves.
Eg StrongARM.
Notice the FAB stats on these parts, similar die size to P2, but less advanced process. (0.35, 0.25μm)
15 is the number of ARM licensees who have architectural licenses. (Info may be stale or otherwise inaccurate.) Not the number who merely license the core. They are the ones who can make changes - I don't know all of the parameters but it's expensive.