I doubt that a GCC-based toolchain will ever compile on the P2 target platform though. :-)
I would never say never - a more relevant question is 'How fast will the P2 build a GCC-level toolchain?'
Those Rasp Pi numbers show other small target platforms can compile GCC-level tools.
What will those times be down to when the P2 ships?
If P2 can never get close, then 'self hosting' is more illusion than practical delivery.
True, you can compile GCC on P2. You just have to port an OS that is supported by GCC. Have fun! :-)
Also, I'm pretty sure that when Chip mentions self-hosting, he is talking about Spin not C.
We have self-hosted Spin+PASM on the Prop 1 for years (Sphinx, Spinix). And we have a number of other self-hosted languages like FemtoBasic and a few Forths. And I'm pretty sure a simple C compiler will also be possible (something like Small-C).
What you always need for a self-hosted P2 toolchain is a keyboard, a monitor and mass storage - an SD card, for example. And here is my problem: as long as we only have PS/2 keyboard and VGA support, I don't see how this is more future-proof than a Windows or Linux computer with a toolchain. A keyboard and a VGA monitor are also not easier to connect, or smaller, than a little notebook or tablet.
So we would need USB host for the keyboard and HDMI for the monitor, but how long until those get replaced by new standards?
Andy
IMHO, VGA and other analog video, like component, will be useful for a very long time yet.
Personally, serial operation is a must. People will have a device of some sort, and doing a basic terminal isn't hard. That will see use, like the monitor did and still will.
But, I agree on the keyboard. Seems to me a serial link, or just a clever design that can be hooked into, can be set up for whatever people end up using.
Maybe a USB module gets done. Then, a program image with that gets loaded.
If the self-hosting tools can just take streams and/or blocks, that is enough to build on where needed.
Even when using another device for serial, one still enjoys the benefit of a consistent environment.
Point of sale and other business and industrial solutions lag very significantly behind. For 10 years at least, getting a simple keyboard and display won't be an issue.
That is long enough for other options to get sorted out.
HDMI is a mess, and it is a moving target like VGA was. Baseline options are there and will continue to be there. Adapters will be there too, if nothing else, just due to not everyone moving, or wanting to move, so fast.
It is not hard to use very old video today. There are some PITA niche cases where standards were abused, but the vast majority work just fine on our current displays, or an adapter handles those.
For cases where that movement is important, the tools that do not run on the thing will be there too.
I've tried to point this out as well. How many people actually use these self-hosted tools? I know some people use the various Forth systems but what about Spinix or Sphinx? Does anyone use those?
P1 is very small. FORTH works very well, despite that. Kudos to Peter.
But the small size really limits things.
P2 is big enough to make an all P2 system plausible.
That is the biggest difference I see.
Besides, the way I see it, Chip works at a low level for his tools. If these are done in SPIN+PASM, they are way more accessible and understandable than x86 and Delphi.
When that all gets done, the P2 and that set of code defines the "always works" baseline, much like a basic VGA 640x480 does, or the P1 Prop Tool does.
Everyone builds from there, or not, as they see fit, and if Chip builds, it's all on a well known base too.
Remember, Chip is working how he wants on tools he wants. That has basically zero to do with higher level or bigger things.
And the simple nature of SPIN+PASM is an artifact of Chip's process, which is inclusive of the P2 and the programming of it. Many of us want that result.
I know I do, and that is due to how effective and easy it is.
Other use cases will center on C or maybe just PASM, and will play out in the expected way, and will deliver their benefits in the expected way.
We can get that latter result any number of ways, we really won't get the unified, tight vision Chip has, unless he builds it. And it's not like Chip is going to fire up gcc and build it all there. That just isn't the design vision, nor how he works.
Since it will get built, why not have it running on the chip? It won't be big and complex, and that is by design too. That is a good thing.
Again, plenty of people want to work like he does.
The minute we get commits on the current design, C can start, and may beat SPIN 2.
Having the tools Chip would make does not impact anything really.
Chip's P1 Spin/PASM compiler is written in x86 assembly. I wonder if he'll write the new P2 compiler in both x86 assembly and PASM?
I think he will get PASM complete on x86, like he did already. Last time, he started on SPIN 2, but got stalled. Was easiest to just extend the P1 code. And now it will likely be easiest to modify the existing P2 code.
Once PASM, and enough SPIN is done, I'll be curious to see if he ports or builds both, or something different.
I'm guessing he will port once the Intel tools are complete enough that they don't end up a bottleneck. Maybe do enough to be sure of whatever ends up in ROM.
I doubt it would all get ROMed. ROM is probably the same as we have seen already.
If he were willing to use C then he could just modify, or have Roy modify, the OpenSpin compiler for the P2. He would then be able to compile it with PropGCC and run the exact same code on the PC and on the P2. Heck, he could even write Spin2 in Spin and get the same benefit if he doesn't like C. He could use spin2cpp to convert the Spin2 compiler to C++ to run on the PC and just run it native on the P2 itself.
I know I would prefer the nicely written, useful code we have seen so far to the output of gcc.
And having seen how Chip works, I also know I would prefer the SPIN PASM that results from his current process too.
There is all the time and space in the world for more standard tool sets. I, and I know I write for others, want the one Chip does on the way he does it.
A self-hosted development system on the P2 should be feasible. It's difficult to do it on the P1 because of the memory limitations and slow speed. The Sphinx compiler is impressive given that it works within the 32K of hub RAM. P2 will have much more RAM and will also run substantially faster.
However, with that being said, I would much rather use a PC for development. A PC provides for a fuller set of tools than a self-hosted P2 system would. From a PC one has access to the OBEX and it can run the Prop tool, BST, PropGCC and SimpleIDE. I don't see much advantage in trying to recreate that on a P2 platform. Then again, it might be fun to try to implement such things on the P2.
I suppose but it seems really wasteful of time and effort to write a compiler in assembler for two different architectures. It also will make it much more difficult to maintain the two in parallel. What I hoped would happen was to have someone work in tandem with Chip in a co-development environment where a software guy (maybe Roy) writes the tools and Chip designs the hardware and specifies the Spin language specification.
I'll bet there was a way to deal with it, and as mentioned so many times before, it's not just a simple addition.
Of course there was another way to deal with it, but it took a lot of reshuffling of what code and functions went in what cogs.
It may not be "just a simple addition", but compared to what is already implemented it can't be all that complicated either. A lot of the required circuitry is already there for the waitxxx instructions.
Here is the conflict. Adding the feature will improve, or is likely to improve this use case, but it will come at the expense of other ones that define the product in the market.
Given how things are intended to be done, and the strong motivation by users to do them as they are used to doing on other products, people will use the feature way more than intended, essentially marginalizing the real differentiator.
Worse, the limited feature and the intent of the Prop won't be aligned with expectations, which will reduce the perceived value and increase the perceived cost of doing it as intended.
While having it would likely have made your scenario easier, the minor investment you did make expands your ability to use the product as intended and maximize the benefits that go along with that too.
Props work differently. If they don't, there really isn't a reason for them to exist.
Props are easy most of the time too.
That won't continue to be true if the feature set gets diluted by niche case add ons.
Finally, Props work differently, and how can that actually make sense when they include features intended to make them work the same as everything else?
Either the Propeller way works or it does not. Which is it?
The Propeller is perhaps the best chip for real-time deterministic applications to date, so I find it quite puzzling that some of its users are so opposed to one of the two best event handling methods devised so far.
The only real difference I can see between a waitxxx and an event task switch is that a task switch allows non-time-critical code to run until the event occurs, while the waitxxx stops executing code while it is waiting for the event. The event code would be just as deterministic as the waitxxx code.
Implementing a single hardware event switch in each cog would require minimal silicon for the functionality added, so to me this opposition seems to be based more on ideology than any cost/benefit rationale.
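The trade-off described above can be sketched in a toy simulation (plain Python, with invented names - this is an illustration of the concept, not actual P2 or cog behavior): both approaches react to the event on the same tick, so determinism is unchanged, but the event-driven cog gets background work done in the meantime.

```python
# Toy model: a "cog" watches a timeline of ticks for an event flag.
# Names (blocking_wait, event_task_switch) are invented for illustration.

def blocking_wait(events):
    """waitxxx-style: the cog executes nothing until the event arrives."""
    work_done = 0
    for tick, fired in enumerate(events):
        if fired:
            return tick, work_done  # event serviced; cog was idle until now
    return None, work_done

def event_task_switch(events):
    """Event-style: non-time-critical work runs until the event fires."""
    work_done = 0
    for tick, fired in enumerate(events):
        if fired:
            return tick, work_done  # serviced on the same tick as above
        work_done += 1              # background task runs while waiting
    return None, work_done

timeline = [False, False, False, True]
print(blocking_wait(timeline))      # -> (3, 0): same latency, no work done
print(event_task_switch(timeline))  # -> (3, 3): same latency, 3 units done
```

Both functions report the event at tick 3, which is the determinism point in the argument above; the only difference is whether anything useful happened during the wait.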
There is zero need to make everything comply in that way.
Comply in what way? I guess everyone here worships Chip and doesn't think anyone else could possibly have a good idea. While I agree that opening up the design of anything to a group of hundreds of forum users is likely to result in chaos, I do think that two or three people working together can come up with a consistent and elegant product. I guess it doesn't matter though. There is no way that Parallax will ever adopt that kind of approach. You have no need to worry.
Agreed, and I think it will work this way because the P2 opcodes will be defined a long time before the P2 silicon sign off.
That means Chip needs to focus on the P2 Verilog design sign-off critical path, and cannot be distracted "writing tools".
P2 opcodes will likely be defined before the first (Alpha) P2 FPGA images, because the smart pins coding needs to come after the Core opcodes.
The not dark age tools will be off and running like last time.
And that is a good thing.
What then is the harm in seeing Chip produce the kinds of things he does?
What, afraid to compete with it?
I would normally not write that, but I very seriously question the better ideas comments.
Those better ideas run on and around just about anything we can name. It's not like anything is exclusive is it?
The stuff you want, David, can and will get made. Let's also ensure Chip's stuff gets made.
That is what the self hosting project is.
I'm just thinking that we could all get P2 + the tools needed to use it faster if the work was spread out a bit rather than having a single bottleneck in the development process.
What then is the harm in seeing Chip produce the kinds of things he does?
What, afraid to compete with it?
Only if you ignore completely the P2 Silicon time line = no harm at all.
However, I am sure Ken will make sure Chip is fully Verilog focused until the P2 finally moves to FAB - after that, Chip can be allowed to play in whatever tools sandbox he wants to.
Also, I'm pretty sure that when Chip mentions self-hosting, he is talking about Spin not C.
I agree, but C-level tools self hosting will be the very next question users ask.
Unless you get Linux running on P2, I think that is very unlikely and also probably a pretty useless effort.
No need to define C as a goal for self hosting at all.
Put the Prop on a board intended for use on a Pi and there it all is.
Yes, that sounds like a far better plan than trying to self-host GCC.
But then you need a computer anyway, which can also run a full toolchain...
Once a version gets done, the definition will too. At that point, it won't matter much time wise.
Chip's SPIN and PASM are likely to be loose, easy, and powerful, like they are on P1.
The few times input has been a part of the process, it got a lot more messy, and a lot less fun too.
Frankly, besides some bug fixes, there won't be a big need to maintain. The end result should work and look a lot like the P1 tools.
Of course, others will make variations, etc... but there won't be a big need to chase them.
Oh well, we stay in the dark ages then I guess...