With the 180nm process, P1.5 should be able to run faster than P1 -- maybe it could achieve the 200 MHz speed targeted for P2. I think a P1.5 with a hardware multiplier, 200 MHz operating frequency, 64 bits of I/O and 60 KB of hub RAM would be an extremely useful device.
I don't know anything about chip design, but it seems to me that it should be possible to modify the Verilog so that the boot code, Spin interpreter, etc. are in "shadowed" ROM/RAM, making it possible to use the full 64K (not 60K) memory area as RAM with backwards compatibility:
- The upper memory area in the hub is connected to both ROM and RAM.
- Initially, when the upper hub memory is being read, it reads the ROM and writes the RAM ("copy mode" or "shadow mode").
- Any write attempt to an address of $8000 or higher results in the hub memory being switched to "normal mode" or "RAM mode": all further reads will happen from the RAM; the ROM remains disabled.
This will change $8000-$FFFF into writeable memory that's initialized from the ROM. This would be completely compatible with existing Spin and PASM code which only reads from those locations. Obviously the boot loader would still only load 32K from the EEPROM but if you have a program that's bigger than 32K, that's easy to work around.
The only issue is that if you want to use the RAM but also need the ROM data, you have to read the area you need before doing the first write to any address at $8000 or above. That should be easy to accomplish, and we (or Chip) might even consider taking care of this in a slightly modified boot ROM: all that's needed is a small loop that reads all data from $8000-$FFFF before it launches the Spin interpreter. (Booting would take a few microseconds longer than before, so maybe that would require a vote to find out whether that's critical to anyone.)
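For illustration, the shadow-mode scheme described above can be modeled in software. This is just a behavioral sketch, not actual P1V Verilog; the class and method names are invented for the example:

```java
// Behavioral model of the proposed shadowed upper hub memory:
// reads of $8000-$FFFF come from ROM (and also populate the RAM)
// until the first write to that range, which permanently switches
// the region to RAM-only "normal mode".
public class ShadowHub {
    private static final int SHADOW_BASE = 0x8000;
    private final byte[] ram = new byte[0x10000];
    private final byte[] rom = new byte[0x8000];   // mapped at $8000-$FFFF
    private boolean shadowMode = true;             // "copy mode" at reset

    public ShadowHub(byte[] romImage) {
        System.arraycopy(romImage, 0, rom, 0, Math.min(romImage.length, rom.length));
    }

    public int read(int addr) {
        addr &= 0xFFFF;
        if (shadowMode && addr >= SHADOW_BASE) {
            byte b = rom[addr - SHADOW_BASE];
            ram[addr] = b;          // in copy mode, reads also fill the shadow RAM
            return b & 0xFF;
        }
        return ram[addr] & 0xFF;
    }

    public void write(int addr, int value) {
        addr &= 0xFFFF;
        if (addr >= SHADOW_BASE) {
            shadowMode = false;     // first upper write disables the ROM for good
        }
        ram[addr] = (byte) value;
    }
}
```

Note the consequence discussed above: an upper address that was never read before the mode switch will afterwards read back as uninitialized RAM, which is why a copy loop in the boot ROM (or in user code) would be needed if the ROM data is still wanted.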
I think it's way too soon, right now things are just above vaporware status. People have ideas and that's it.
No formalized design, no testing or verification, no benchmarks, etc.
Before people start talking about financing via Kickstarter, how about talking to people who went through this process to get working silicon first, and getting some rough cost estimates?
Yeah, people talk about Adapteva, who used Kickstarter to do their initial financing. But they also had 3 experienced IC designers who knew the ropes. And guess what? That amount of money wasn't enough; they almost went under until they got a $3.6 million cash infusion from Ericsson and a VC guardian angel.
Maybe if Chip was willing to assist and run the show, it would be doable. But I wouldn't trust it in the hands of someone with zero experience in the process.
I agree, that's why I think the best approach may be for Parallax to do the P1.5 first, and then do the P2. With the published P1 Verilog there are many volunteers who could help Chip with the P1.5 design. This reduces the risk of generating a new chip, and produces a chip that is 2.5 times faster than the P1 with twice the I/O and RAM. Just think of the applications where the P1 doesn't quite have enough memory, speed or I/O, and people end up using 2 P1s or extra external support circuitry. All of that could now be done in a single P1.5.
I was just following Chip's suggestion about using the same process as the P1.
However, if this resulted in a die that is too large, then it would be better to use the same process that the P2 will use. If this were done, the P2 pin drivers and analog circuitry would be used for the P1.5 instead of the P1's pin drivers and analog circuitry. The P1.5 could also inherit the smart pin technology, and could provide a stable and known platform for testing out the P2 peripheral circuitry.
Problem is, you have now arrived back at the P2 Station.
The "P2 pin drivers and analog circuitry" are on the ring, and they define the die area, so the P1.5 becomes pretty much the same area as the P2, but with a lot less core power.
P2 peripherals can be tested on P1V, and Chip has indicated he is keen to do that.
Also P2Core can be tested on all these P1V Capable boards that are exploding out of the woodwork.
I'd say 180nm + Analog needs to be P2, anything else is dilution and delay.
P1V is what many are calling the Verilog variants of P1.
I asked Chip earlier about testing aspects of the P2 Smart Pins using P1V, and he sounded keen.
The Verilog code is not complex, or anything special, but the config and use cases are a little trickier.
Yeah, people talk about Adapteva, who used Kickstarter to do their initial financing. But they also had 3 experienced IC designers who knew the ropes. And guess what?...
It's even worse than that.
The Epiphany chip, the interesting part with 16 floating point cores, was already designed and made by the time of the Kickstarter campaign. The Kickstarter was only to get the Parallella board designed and produced, which moved the Epiphany chip from an ARM + FPGA + Epiphany dev board to the Xilinx Zynq based board we have now.
There are a lot of good answers here already. The answer, of course, is yes, but there are several steps, and they do of course include solving the financing!
BTW Parallax team (Chip and Ken in particular): congratulations on the Propeller. It has a real ability to offer a very great deal to this industry.
There are so many options for delivering a device now that, even without taking into account the possibility of a very large number of (potentially) good competing ideas for the device functionality, there is going to be some real thinking time needed somewhere, and I don't think that Parallax can or should carry the load by itself.
We want more from the Propeller, so we the users should find a way to contribute more, and if it helps make Parallax into a bigger employer/manufacturer, I think that is a positive outcome! I have had my Intellectual Property stolen, and I know what that's like; I think crowdsourcing is one way to kill that process stone-dead.
I would also like to point out that there are many products that ship with FPGAs inside them. A lot of comms gear (routers of all sizes, types and brands being a good example) is shipped with FPGAs sitting on the main board, on daughter boards and even on plug-in cards. There are many benefits to this, of course, or it wouldn't be a relatively common practice. There are downsides, but they can always be beaten.
A few benefits of shipping programmed FPGAs in a product that might be considered are:
- Security of Intellectual Property (IP) from conception to final assembly
- Security after delivery, as many FPGA products have security features that enable the erasure or scrambling of the FPGA cells if they are attacked or probed
- Cost-benefit advantages
and there are quite a few more. There are, of course, always handling, soldering and packaging issues (in the trade and community) that need to be taken into account, but these must be considered whatever the device configuration, and perhaps less so with DIP than with some of the other higher-density packages, as has already been pointed out.
To expand on that by-no-means comprehensive (3-point) list (especially the last point): if one knows that there are time pressures to get a product to market, and that one will likely be doing hardware revisions throughout the working life of the product, the FPGA offers great advantages on all three points above.
While this is, of course, a proprietary and highly competitive (often pricey) environment, there is no reason why one cannot look at the choices that big manufacturers with relatively small production runs on any hardware revision might make, and see if there is a way to modify or shoehorn their approaches to suit the smaller manufacturer, maker or hobbyist; I think there is.
One real advantage of the FPGA is that (assuming you understand the definition languages) no single revision needs to be chosen as the only production product. The Verilog of a few designs can be kept and maintained, allowing greater options to be available.
To give some examples, I really would love a version of (the truly innovative) Propeller that I can run at 500 MHz and 1 GHz (yes, I know the timing issues, board layout problems, etc., but nothing is insurmountable), with extra RAM for each cog, LVDS, at least two CSI-2 (or better) interfaces, and HDMI, plus all the good things that currently exist in the Propeller, and at least one version with no multiplexing of all those extra pins. This, of course, is not the extent of my desire; I also want some lower-speed but secure Propellers with hardware ID and support for SHA-2.
In short, there is no shortage of needs and desires out there, and it may be that, at least until the most popular versions shake out of the mix, an FPGA-based product is a way out.
Someone more published and perhaps more articulate than I recently said that "to stay static in electronics technology (or any technology) while everyone else is moving forward is really to go backwards"!
That would work as long as you also got a written exception from anyone who contributed GPL changes to the original Parallax GPL code, as Parallax would not be able to provide you with a non-GPL license for other people's GPL contributions.
It's also one of the reasons to keep the Verilog private, but actually I don't expect to make *any* changes to the P1 core itself; it's more like an SoC concept of having 2 chips on 1 die. It might even be possible to use the existing P1 image. This way it's possible to use an actual P1 alongside an actual PIC during development.
The bigger chunk of work is to integrate the AE18 core, because it will need FLASH and security, and I also want to add some optimization features and instructions to better support my Java runtime.
The idea of binding a soft-peripheral core to a PIC18 core is something I have wanted to do for a long time (google 'java dust leros' ) but opening the P1 creates a new opportunity that I am exploring here.
VBB is fascinating. Missed your earlier post about it. I can see the Propeller being a good fit for such a setup.
Thanks Tubular - Well of course I think so also :-)
VBB is very much in line with the practical users common to Parallax and Arduino, and I have a whole bunch of technology that I have been working on in one way or another since 1999 just itching to get out there. The challenge is to figure out a pathway with the right balance between open source and business.
Keep in mind none of the Arduino founders do it full time! They all have day jobs (or have left the project altogether). I just fancy being able to do what I love full time, and that takes a business model!
I don't understand why any "community support" is needed here. If no changes to P1 are to be made, then the obvious approach is to ask Parallax for a non-GPL license. And there would most likely be financials involved.
Anything that runs on a real Propeller is useful, and would likely see some interest.
From there, whatever you do end up with in silicon may or may not be of interest; it depends on what using it requires people to do. Its having a P1 in it might be interesting, depending on what it takes to get to the P1.
As for transparent peripherals, that seems doable with the MIT code out there, in addition to some code you or others may contribute. If those peripherals are MIT as well, you may see some interest.
Otherwise, all I see is your closed thing. How does "the community" play a role here?
Well, in the same way that ARM licenses its cores to Atmel and so on to create new devices that communities build open-source hardware/software around. I don't see Arduino releasing the AVR Verilog when they release an open-source hardware/software platform, so it's well accepted to build open platforms in this way.
The "community" I am talking about is not a Verilog community; it's an open-source Propeller soft-peripheral and applications community. It's similar to the way Arduino developed open-source libraries that people can use as objects, and it's also similar to OBEX.
Similar but not the same.
There are distinct benefits to keeping the same P1 core:
* All libraries developed can be shared with anyone else who develops a Propeller-based chip
* All libraries can also be used with a real Propeller
* All the tools, emulators and compilers continue to work
* OBEX objects can be forked to create the new drivers
* All existing languages (C, etc.) can be used to create drivers
What I am proposing is a soft-peripheral framework within which Propeller code can be refactored and bound as objects within a genuine object-oriented runtime, with multiple language syntaxes and compiler support for transparently binding and dynamically creating the objects in one of several industry-standard high-level languages: Java, Boo (Python), Visual Basic.
My proposal is a step-by-step approach building on already working components with no single 'big step' so it's a very pragmatic proposal that can quickly get hardware into hands and build from there.
I am not really sure what you would suggest as an alternative business case but perhaps I will learn as other alternative projects form and thrive. I am 'open' to that :-)
I have a very sophisticated ahead-of-time java compiler technology based on the PIC18 micro that I would like to 'secure' by embedding in silicon...
The P2 has security features, and I would also suggest you look at whether that claimed very sophisticated ahead-of-time Java compiler technology can be recoded to fit into a P1V (or P2) COG.
The good thing about a P1V, is you can scale one COG to focus on a specific application, whilst keeping the others to some 'standard'.
Also be aware that any process with FLASH will be more expensive to develop than ROM + loader, and the MHz will be more compromised.
The P2 has security features, and I would also suggest you look at whether that claimed very sophisticated ahead-of-time Java compiler technology can be recoded to fit into a P1V (or P2) COG.
Yes, anything is possible but this takes years of work. I am only really interested in taking what works today and starting from there.
The good thing about a P1V, is you can scale one COG to focus on a specific application, whilst keeping the others to some 'standard'.
Blaspheme!
Well, actually. I absolutely agree with this. I understand the original idea that Chip adhered to, where every cog was interchangeable with every other cog. During the P2 development discussions, the idea of having asymmetric cogs came up a couple of times, and they were all shot down.
But, now we have a chance to see whether symmetry really is important or not! In the end, we may find that just a few variations will cover the majority of use cases. Arguably, as long as all of the chip variations are package and pin-compatible, it would still be possible for projects to "evolve" by selecting a different chip with a different mix that meets whatever the new requirements are.
And before someone suggests that this could end up being like Microchip's offerings, where there are a myriad of choices, I think this is where the overall Propeller concept would shine. Because there would still be cogs, a lot of those variations (in Microchip's offerings) would be dealt with in software.
I'm working on a SystemVerilog port of the Propeller.
I plan to keep the core the same. However, where unused space exists I will add extra features. These features will all be controllable via a global header file where you can enable them as you like. Each feature will, of course, cost logic.
For example, I'll add a feature to turn on the mul, muls, enc, and ones opcodes. I'll also add a feature to let you easily change the number of cogs. Additionally, I'll have features to split the hub up into multiple slices that can be accessed by different cogs at the same time, etc.
By designing the code this way the community will be able to add extensions to the code for everyone to use while maintaining backward compatibility.
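To make the hub-slice feature above concrete, here is a rough behavioral sketch of why it helps. It assumes slices are selected by low address bits (an assumption for illustration; the real port may arbitrate differently), so several cogs can be granted access in the same clock as long as they hit different slices:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of hub bandwidth: the stock hub is one slice (one cog served
// per clock); splitting it into N slices serves up to N cogs per clock
// when their addresses fall in different slices.
public class SlicedHub {
    private final int slices;

    public SlicedHub(int slices) { this.slices = slices; }

    // Count how many of the requested long addresses can be served in one
    // clock, allowing at most one access per slice per clock.
    public int grantsPerClock(int[] requestedAddrs) {
        Set<Integer> busySlices = new HashSet<>();
        int granted = 0;
        for (int addr : requestedAddrs) {
            int slice = (addr >> 2) % slices;   // low long-address bits pick the slice
            if (busySlices.add(slice)) {
                granted++;                       // slice was still free this clock
            }
        }
        return granted;
    }
}
```

With 1 slice, four cogs requesting hub access share a single grant per clock; with 4 slices and addresses spread across slices, all four can be served at once, which is the whole appeal of the feature.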
I noticed there is at least one P1V-supported FPGA board that has HDMI capability:
http://pipistrello.saanlima.com/index.php?title=Welcome_to_Pipistrello
and if SPEED matters above all else, maybe someone will port to Achronix devices and report some numbers?
http://en.wikipedia.org/wiki/Achronix
I would like to gauge community support for the following step-wise approach:
1) Create a module with a PIC18 + Propeller and crowdfund that module
I am already working on a similar module, it might be a derivative of this Arduino footprint product I am developing
http://forums.parallax.com/showthread.php/156000-Fastest-Possible-FIFO-Buffer?highlight=iceshield
2) With the new module, the community develops an abstract soft-peripheral library that uses the Propeller transparently through Java objects, similar to Arduino (but dynamic!)
e.g. I2CMaster i2c = new I2CMaster();
3) Merge the P1 core with an AE18 core ( http://opencores.org/project,ae18 ) to create an FPGA version of the module
- Requires a FLASH process
- Requires security
- Will be closed source, perhaps with only a small team of core community developers, all other developers use the module from #1
- No changes to P1 would be made
4) Crowdfund an ASIC from the FPGA design
- Ideally, put a wireless radio on the chip ( stretch goal? )
- I know a world-class ASIC designer who can help with the wireless module
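The Java-object binding proposed in step 2 might look something like the sketch below. This is only an illustration under stated assumptions: the I2CMaster class, its cog-loading behaviour, and all method names are hypothetical, not an existing muvium or Propeller API.

```java
// Hypothetical soft-peripheral object: constructing it would load an I2C
// driver (e.g. a forked OBEX object) into a free Propeller cog, and method
// calls would become commands posted to a hub-RAM mailbox. Cog allocation
// is simulated here so the sketch is self-contained.
class I2CMaster {
    private final int cog;              // cog the driver runs in (simulated)

    public I2CMaster() {
        this.cog = loadDriver();        // would start the PASM driver on a free cog
    }

    private int loadDriver() {
        return 1;                       // placeholder: pretend cog 1 was allocated
    }

    public void write(int address, byte[] data) {
        // would post a write command and a data pointer to the driver's mailbox
    }

    public int getCog() {
        return cog;
    }
}
```

The point of the design is that the application code never sees PASM or cog management; `new I2CMaster()` hides the driver load, exactly as the "dynamic" Arduino-style usage in step 2 suggests.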
I already have sophisticated tools that work with my Java AOT compiler (called muvium) based on the PIC18 architecture.
* Frappucino - Arduino framework
* VirtualBreadboard development and emulation environment
* Boogie - Python compiler with java emitter
* VBEE - Visual Basic compiler with java emitter
* Java - javac of course
* JDWP - Java debug wire protocol for debugging in eclipse/netbeans
* Eclipse plugin
* Netbeans plugin
Muvium has been in development since 2003, and I use it in my own products such as the VirtualShield and WaveRaster. I have not released it broadly because discussions with Microchip keep going in circles (they benefit most if I just release it open-source) and also because Arduino disrupted my business model. However, this has been a mixed blessing, because most of my sales nowadays are to Arduino users using my muvium-based emulation tools, which I retargeted as the Arduino Toolkit in VirtualBreadboard.
Only the Verilog. The peripheral libraries would be open source and would run on the regular Propeller also.
Maybe you're right about there being no takers, but then it's not worth me doing it either.
I will keep an eye out to see if you're right about the emergence of a 'pure' open-source community.
Maybe I will learn something new.
The assumption is that the ASIC would be manufactured in conjunction with Parallax (as the major partner), who would issue a non-GPL license for the core.
It's also one of the reasons to keep the Verilog private, but actually I don't expect to make *any* changes to the P1 core itself; it's more like an SoC concept of having 2 chips on 1 die. It might even be possible to use the existing P1 image. This way it's possible to use an actual P1 alongside an actual PIC during development.
The bigger chunk of work is to integrate the AE18 core, because it will need FLASH and security, and also I want to add some optimization features and instructions to better support my Java runtime.
The idea of binding a soft-peripheral core to a PIC18 core is something I have wanted to do for a long time (google 'java dust leros'), but opening the P1 creates a new opportunity that I am exploring here.
Thanks Tubular - Well of course I think so also :-)
VBB is very much in line with the practical users common to Parallax and Arduino, and I have a whole bunch of technology that I have been working on in one way or another since 1999, just itching to get out there. The challenge is to figure out a pathway with the right balance between open source and business.
Keep in mind that none of the Arduino founders do it full time! They all have day jobs (or have left the project altogether). I just fancy being able to do what I love full time, and that takes a business model!
Welcome to the Forums!
Nice write up.
Just a heads up, it's a Propeller not Propellor.
Anything that runs on a real Propeller is useful, and would likely see some interest.
From there, whatever you do end up with in silicon may or may not be of interest. It depends on what using it requires people to do. Its having a P1 in it might be interesting, depending on what it takes to get to the P1.
As for transparent peripherals, that seems doable with the MIT code out there, in addition to some code you or others may contribute. If those peripherals are MIT as well, you may see some interest.
Otherwise, all I see is your closed thing. How does "the community" play a role here?
Well, in the same way that ARM licenses its cores to Atmel and so on to create new devices that communities build open-source hardware/software around. I don't see Arduino releasing the AVR Verilog when they release an open-source hardware/software platform, so it's well accepted to build open platforms in this way.
The 'community' I am talking about is not a Verilog community; it's an open-source Propeller object soft-peripheral and applications community. It's similar to the way Arduino developed open-source libraries that people can use as objects, and it's also similar to OBEX.
Similar but not the same.
There are distinct benefits to keeping the same P1 core:
* All libraries developed can be shared with anyone else who develops a Propeller-based chip
* All libraries can also be used with a real propeller
* All the tools, emulators and compilers continue to work
* OBEX objects can be forked to create the new drivers
* All existing languages (C, etc.) can be used to create drivers
What I am proposing is a soft-peripheral framework within which Propeller code can be refactored and bound as objects within a genuine object-oriented runtime, with multiple language syntaxes and compiler support for transparently binding and dynamically creating the objects in one of several industry-standard high-level languages: Java, Boo (Python), Visual Basic.
My proposal is a step-by-step approach building on already-working components, with no single 'big step', so it's a very pragmatic proposal that can quickly get hardware into hands and build from there.
I am not really sure what you would suggest as an alternative business case but perhaps I will learn as other alternative projects form and thrive. I am 'open' to that :-)
The good thing about a P1V, is you can scale one COG to focus on a specific application, whilst keeping the others to some 'standard'.
Also be aware that any process with FLASH will be more expensive to develop than ROM + loader, and the MHz will be more compromised.
Yes, anything is possible but this takes years of work. I am only really interested in taking what works today and starting from there.
Yes, I agree, this may well be the show-stopper and is probably the biggest risk factor for getting to an actual ASIC.
Blasphemy!
Well, actually. I absolutely agree with this. I understand the original idea that Chip adhered to, where every cog was interchangeable with every other cog. During the P2 development discussions, the idea of having asymmetric cogs came up a couple of times, and they were all shot down.
But, now we have a chance to see whether symmetry really is important or not! In the end, we may find that just a few variations will cover the majority of use cases. Arguably, as long as all of the chip variations are package and pin-compatible, it would still be possible for projects to "evolve" by selecting a different chip with a different mix that meets whatever the new requirements are.
And before someone suggests that this could end up being like Microchip's offerings, where there are a myriad of choices, I think this is where the overall Propeller concept would shine. Because there would still be cogs, a lot of those variations (in Microchip's offerings) would be dealt with in software.
I plan to keep the core the same. However, where unused space exists, I will add extra features. These features will all be controllable via a global header file where you can enable them as you like. Each feature will, of course, cost logic.
For example, I'll add a feature to turn on the mul, muls, enc, and ones opcodes. I'll also add a feature to allow you to easily change the number of cogs. Additionally, I'll have features for you to be able to split the hub up into multiple slices that can be accessed by each cog at the same time. Etc.
By designing the code this way the community will be able to add extensions to the code for everyone to use while maintaining backward compatibility.
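A minimal sketch of how such a global feature header might look (the macro names and file name here are assumptions for illustration, not part of any actual P1V release):

```verilog
// config.vh -- hypothetical global feature header for a P1V build.
// Each enabled feature costs logic; comment a line out to disable it.
`define ENABLE_MUL          // turn on the mul, muls, enc, and ones opcodes
`define NUM_COGS 8          // number of cogs to instantiate
//`define HUB_SLICES 4      // split the hub into per-cog slices

// Elsewhere in the design, optional logic would be gated like this:
// `ifdef ENABLE_MUL
//     ... multiplier datapath ...
// `endif
```

Because every extension sits behind a `define`, a build with everything commented out would synthesize back to the stock P1, which is how backward compatibility is preserved.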