Most of the size of the OpenSpin executable is down to Windows stuff, not to it being written in C++ (it contains a fair bit of plain C anyway).
It does allocate a few buffers that are several megabytes for various things, and those could be made smaller or reworked, but is it really worth it? The platforms that actually matter have way more memory than needed, even if you include small computers like the BBB or RasPi. Is it really meaningful to even talk about platforms that no one will use except to prove a point?
Honestly, by comparison to most compilers on modern platforms, OpenSpin is pretty small.
RE: Sphinx
If I recall correctly, it's not complete.
Maybe they've decided to release the Verilog code so we can all play with our ideas for extending P1!
Since Ken is directly involved in this, I am guessing that the Wednesday announcement has two parts: 1) P1 in Verilog, with perhaps a few nifty enhancements. 2) A new Parallax FPGA board on which to run it. (Of course this new board is also for P2 Verilog, which will follow shortly.)
My own solution to a self-hosted environment was to toss another $4.00 chip into the mix with an on-board editor.
Combined with Michael Park's software, you've got a reasonable system.
Here are a couple of videos... I'm using a Propeller and Micromite with Sphinx as the compiler.
I think the problem is that "self hosted" in this context means the development environment running directly on the Propeller, not on another chip, even if it is only $4.
Nah! That's not "It's going to open doors to people innovating on the current design."
So it's probably some additional software or specification info.
I'd agree. The talk of self-hosting is a reminder that cloud hosting is getting more important, and I know they have been working on iPad (browser) hosts.
One item that is missing from the Prop toolbox is a good Visual Simulator, so maybe Chip was helping nail a Cloud Simulator? That would fit the teaser.
Self-hosting has little practical value; it is nice 'to prove you can do it', but everyone has far more power in their other hand these days, so the focus should go into ease of use & better education tools.
Sphinx supports many features of Spin and PASM, but it does have some limitations. Michael Park integrated the Sphinx compiler with keyboard, display and SD file drivers that executed directly within several cogs. This freed up hub RAM for the compiler. However, the 32K hub RAM does limit the number of symbols that can be defined. There are also some features of the Spin language that the Sphinx compiler doesn't support, to save memory.
I've recently ported the Sphinx compiler to spinix, where I substituted the keyboard, display and SD drivers with modified FullDuplexSerial and FSRW objects. This uses up some of the hub RAM, but it still allows for compiling medium size objects.
The Sphinx compiler is broken up into three components -- lex, codegen and link. The three components parse the source code, generate object files and then link multiple object files into an executable binary file.
I've also written a Spin compiler called spinit. It compiles Spin code to Spasm code, which is a Spin assembly language. This is assembled by a utility called spasm, which generates object files. I then link the objects with a utility called splink.
The Sphinx compiler runs much faster than spinit/spasm because it uses a dedicated cog engine to do list searches. I may incorporate that into spinit and spasm at some point to speed them up.
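The compile-then-link flow described above can be illustrated with a toy model (plain Python; the names `toy_codegen` and `toy_link` are purely illustrative and have nothing to do with the real Sphinx or spinit internals): each "object file" is just code with symbolic references plus the symbols it defines, and the linker lays the objects out and patches the references to addresses.

```python
# Toy model of separate compilation and linking, as sketched above.
# All names here are hypothetical illustrations, not real Sphinx APIs.

def toy_codegen(name, code, exports):
    """Produce an 'object file': code plus the symbols it defines."""
    return {"name": name, "code": code, "exports": exports}

def toy_link(objects):
    """Concatenate objects, then patch symbolic references to addresses."""
    symtab, image, base = {}, [], 0
    for obj in objects:                       # pass 1: lay out, collect symbols
        for sym, off in obj["exports"].items():
            symtab[sym] = base + off
        image.extend(obj["code"])
        base = len(image)
    # pass 2: resolve any word that names a known symbol
    return [symtab.get(w, w) if isinstance(w, str) else w for w in image]

main = toy_codegen("main", ["CALL", "blink_entry"], {"start": 0})
blink = toy_codegen("blink", ["TOGGLE", "RET"], {"blink_entry": 0})
print(toy_link([main, blink]))  # ['CALL', 2, 'TOGGLE', 'RET']
```

The point of the split is exactly what the post describes: each object can be compiled on its own (saving precious hub RAM), and only the final link step needs to see all the symbols at once.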
A Parallax fpga board with P1 code might be it.
That doesn't get me very excited though...
I guess I'll dream about a P1B with 64-IO and maybe more RAM (at least until Wednesday)...
A Parallax FPGA board along with P1 source in Verilog would be interesting because you could experiment with different design changes. Just a binary P1 image wouldn't be that interesting since we already have the chip. Maybe a P1 with more hub memory would be fun to play with but I wouldn't put much time into it unless it was going to be followed by a real chip.
I was looking for the list of minimum customer requirements for the P2.
IIRC they were these..
- More hub RAM
- More I/O
- Security
- Faster
- C software
I am not sure if Analog I/O was on the list.
From my post about "what ifs", all the above were covered. I would like to add...
- Call/Jmp/Ret instruction (for hubexec mode)
- Extend the PC so that we can run hubexec mode
Without any extras, we would get a performance boost and a simplification, moving from LMM to hubexec mode.
Maybe this could be implemented in the same feature size as the P1, or the next size down???
BTW I wasn't considering the new dual port cog ram or anything else from the P2.
KISS and get it out and call it the P2, and the current P2 will be a P3.
I am dreaming about a new P1. Not P1B. Pin compatible to P1. Just faster and with loadable/replaceable ROM. Same instruction set.
I think it would be a smart decision for Parallax.
Existing customers can use it in existing designs. It may replace the P1 in the long run, but that does not really matter since both are sold by Parallax and the customer stays a Parallax customer.
The loader(s) would need to be changed to include the ROM content of the P1 (fonts/sine table/etc.) if needed by the application. Otherwise the former ROM area can be treated as additional RAM with the current instruction set.
Even without support from SPIN/C/FORTH, you can use it as a video/serial/whatever buffer.
In a smaller process than now, the chip would be faster. How much? @Chip may be able to find out.
Support for ~45K RAM in SPIN/C/FORTH/etc. might not be problematic at all. They can all access the (current) ROM area without problems.
SHORT:
this P1-compatible chip, faster than the P1 and with some more RAM, would be the perfect upgrade for any commercial P1 project without any problems!
RESULT:
immediate orders from all existing customers (including us). It may refinance part of the costs. Remember always that it needs to be sustainable.
SIDE-EFFECTS:
Current customers can get that little bit more 'oomph' missing in the current project if needed; otherwise they can still use the current P1. So they stay and wait for the P2 to arrive instead of leaving.
Unified dev-tools for @Chip. BIG one. Just in case all mask sets for the P1 need to be recreated: can today's foundries still work with the files Chip created 10+ years ago for the P1?
Press releases can show that, besides developing a new chip, Parallax still takes care of all their products even after 10+ years. That can be a good thing to present, instead of life cycles of 3-5 years.
DISCLAIMER:
see Loopy's signature. I could not word it better.
A P1-compatible chip, faster than the P1, with enough additional I/O... the RAM problem solves itself.
You can't go above 64K of hub memory without breaking the Spin VM, as I believe it only uses 16-bit addresses. That is, if you care about binary compatibility.
Neither Cluso99 nor Mike was asking for more than 64K of RAM. Cluso99 only asked for more, and Mike asked for ~45K by replacing the current ROM with RAM. A smaller process to increase speed, using as much of the current addressable memory space for RAM as possible, would be a good way to go if it is economically feasible. A tiny amount of ROM to boot from the EEPROM is all that is needed.
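The 64K ceiling is easy to see from the 16-bit addresses mentioned above. A minimal sketch (Python, simply modeling a 16-bit address field, not the actual interpreter code):

```python
# Why >64K hub RAM breaks Spin binary compatibility: if the bytecode
# interpreter keeps addresses in 16-bit fields, the high bits of any
# address past 0xFFFF are simply lost and the access wraps around.

def addr16(addr):
    """Model a 16-bit address register: only the low 16 bits survive."""
    return addr & 0xFFFF

print(addr16(0xFFFF))   # 65535 -- last reachable hub byte
print(addr16(0x10000))  # 0     -- 64K wraps back to the start of hub
print(addr16(0x12345))  # 9029  -- high bits silently discarded
```

Which is why, as noted later in the thread, the fix is a new soft interpreter with wider address fields rather than any change to the compiled bytecode format.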
I should not complain about the memory usage of OpenSpin. It is after all a tiny program by modern standards. The stripped executable is less than 200KB, and if it uses a few megs at run time that is a drop in the ocean on any desktop machine.
I should not complain...but, as we are here I will...:)
There are currently two cases where pruning those last unnecessary megabytes of RAM usage would be desirable:
1) When running openspin in the browser.
Now I know compiling C/C++ into JavaScript produces a huge amount of JS, but when one actually runs openspin.js it gobbles up a lot of megabytes in the browser. Not so nice.
2) We are now running OpenSpin on WIFI routers running OpenWRT, the tiny MIPS-based routers that can be had for 15 dollars from China. We can compile Spin and load a Propeller over the net with those. Those machines only have 32MB of RAM, and OpenSpin takes a lot of that.
I will leave it to you to judge if anyone will ever use either of the above platforms or if the number of potential users is worth the effort.
If a new P1 were to be done, then it needs more I/O, as that is the volume users' requirement.
Keeping the die geometry the same as P1, with a slightly larger die, 64KB hub RAM and 48-64 I/O should be possible.
Perhaps it could still fit in a DIP40 by not connecting the extra I/O.
If the die geometry were shrunk just to the next smaller size, perhaps there could be more than 64KB hub RAM. Might still be possible for a DIP40 version.
>64KB hub RAM only causes a problem with the current Spin interpreter. Since it would be soft, a new version supporting >64KB would be possible. I have already obtained a ~25% speed improvement by being soft and using a small amount of hub RAM.
BTW I am not sure there is sufficient demand for DIP40 for Parallax to offer this.
If it were done, now or later, I would like to see alternative EEPROM or SPI options.
ROM should be tiny, just enough to boot and perhaps a little monitor (cut down P2).
Don't forget the security!
Just consider this...
P1 running at 160MHz (200MHz overclocked??), 64+KB Hub RAM, 1:16 hub access + 16 Cogs switchable to 1:8 + 8 Cogs, 48-64 I/O.
I am drooling.
Pretty sure Chip could do this in his sleep and we could see production in 3 months!!!
The present manually crafted die is MAX'd for the package.
A Synthesised version will be MUCH larger.
- but you still need VccCore?
Unless Parallax can find a process that includes on-chip VccCore and can also give 2.5-5.5V Vcc, I just do not see enough market space between P1 and P2 to bother with?
I'm interested, who is actually developing for Propeller at the moment and who is just waiting for P2?
(I realise I am posting this in a P2 thread so the results may be a little skewed)
Myself, I have a number of P1 projects on the go with two boards due this week, 1 updated and 1 new.
I only use P1 in relatively small numbers, mostly commercially, but I also like to have fun with them....
Are people who use this forum mostly from a hobby/fun perspective, commercial users or like me, both?
As for the P1 announcement coming, whatever it is I will welcome it if it helps further promote the chip and increase its longevity; after all, it's the only Prop we have got ;-)
I have a brand new DIP Prop 1 running on a proto-board as we speak. Not much actual Prop code happening though. I'm using it to test possibilities of programming Propellers from cheap WIFI routers via their on-chip UART and a GPIO pin for reset. So far propeller-load and openspin run fine on my d-link router.
I'm always looking out for Propeller opportunities at work but we generally need a lot more code and data space and networking so it's ARMs all around there.
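The UART-plus-GPIO-reset trick described above can be sketched like this (Python; the GPIO number, the sysfs path and the serial port name are assumptions for a typical OpenWRT board, and the writer function is injectable so the pulse logic can be exercised without any hardware attached):

```python
# Sketch of resetting a Propeller from a router before loading: pulse a
# GPIO (assumed wired to the Prop's reset pin) low, then release it.
# GPIO 7 and the sysfs path are hypothetical; adjust for real hardware.
import time

def sysfs_write(path, value):
    """Write a value to a sysfs GPIO file (requires exported GPIO)."""
    with open(path, "w") as f:
        f.write(value)

def pulse_reset(gpio=7, low_time=0.05, write=sysfs_write):
    path = f"/sys/class/gpio/gpio{gpio}/value"
    write(path, "0")      # pull the reset line low
    time.sleep(low_time)  # hold reset briefly
    write(path, "1")      # release: Propeller reboots, bootloader listens

# Dry run with a recording writer instead of real hardware:
log = []
pulse_reset(gpio=7, low_time=0, write=lambda p, v: log.append((p, v)))
print(log)
```

After the pulse, propeller-load would be pointed at the router's UART (something like /dev/ttyATH0, depending on the board) to deliver the binary in the usual way.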
That sounds interesting, is the plan for you to host a full dev system on the router?
The reason I asked the question above is to work out whether the P2 has in fact had a negative impact on P1 development; certainly the P1 forum seems to have 'slowed' since the P2 forum was started.
Has it stifled innovation and development because people could instantly see they could do more with P2?
I've been using P1 for 7 years now and I know I have only just scratched the surface with it, there's so much more we can do with it, I am sure of that at least ;-)
Oh yes, a full dev system. That is: propeller-load, openspin and vim
It's all working here on my d-link.
But you raise an interesting point. You see, being a router it has a web server for its web configuration interface. We happen to have a working Spin IDE running in the browser. Syntax highlighting editor and all. That web IDE sends binaries back to the server for loading into a Propeller. All we need to do is arrange for the router to serve up that Spin web IDE.
Sadly my particular router does not have enough file system space for that.
Why not just use a RasPi with a Wifi dongle? Or am I missing something?
1) Cost is a lot to do with it.
I have two old d-links here, one is the WIFI router for the house. The other was spare in the junk pile. Now doing service as a giant Prop Plug.
A Raspberry Pi will cost me 40 to 50 euros depending on where I pick one up around here. The WIFI dongle is a further 10 euros or so.
Cluso and Loopy are getting tiny little routers for 20 dollars it seems.
2) Size. These little routers are smaller than a Pi. The board inside is even smaller still, of course.
3) Too easy :) I already have propeller-load, openspin and even SimpleIDE running on the Pi. One thing that did not work reliably on the Pi was in fact my WIFI dongles. I know guys who have the same hardware and it works well for them. It's just a local problem for me.
4) The routers have that nice web interface that comes with OpenWRT. All that nice easy firewall setup and such.
5) I never used a MIPS processor before!
Mostly for me, though, it's because Loopy and Cluso were trying this and Loopy cajoled me into helping out. I do have a bunch of RasPis here so I will for sure be dedicating one of my old 256MB versions to OpenWRT. Just because.
Is it just me that can't see it being an upgraded P1? Forgive me if I'm wrong, but isn't the P2 exactly that?
A faster P1 with more cogs, more ram and a good few extras
I can see it being a P1 FPGA edition maybe, but what I can't understand there is why Ken would take Chip off P2 dev to do this P1 thing so close to P2's completion, unless it was something as quick as his P1 (as-is) FPGA image.
Comments
Combined with Michael Park's software, you've got a reasonable system.
Jeff
I don't follow. How is that self-hosted?
I guess I'll dream about a P1B with 64-IO and maybe more RAM (at least until Wednesday)...
+1 Really wish this was in the cards somewhere. Add an extra cog or two would really make this a spicy dream for me.
One of those is coming, it's called P2
Anything in the new process will need Separate VccCore operation, and 64 io is == P2.
Also working on the router like Heater and Loopy - see the thread in "General", "WiFi & IOT..."
http://forums.parallax.com/showthread.php/156414-WiFi-amp-IOT-for-home-controllers-monitors-using-WR703N-20-routers-and-xx-WRT