There is such a thing as portable Forth, but the Forth interpreter must adhere to standards to achieve it. However, even with standard Forth the programmer has to keep track of where things are on the stack. I'd rather have a compiler to do that menial task for me.
I hate keeping track of the stack myself, although there are so many other advantages that they far outweigh that minor inconvenience. There are very few times that I really need to juggle the stack anyway, plus I avoid those ugly ANSI PICK and ROLL words, as they are more like kludge tools IMO. But then I have four stacks and not two, which just makes good sense as well.
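To make that concrete, here is the sort of thing I mean, as a plain generic-Forth sketch (not Tachyon-specific, and the word names are made up just for the example). Factoring into small words keeps everything at the top of the stack, so PICK and ROLL never get a look-in:
    \ avoid deep stack juggling by factoring into small words
    : SQUARED   ( n -- n*n )          DUP * ;
    : RING-AREA ( outer inner -- a )  SQUARED SWAP SQUARED SWAP - ;
    10 4 RING-AREA .   \ prints 84, i.e. 10*10 - 4*4
With the fiddling buried inside SQUARED, nothing in RING-AREA ever has to reach more than one item down.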
I disagree, though, with this use of black-or-white labels, saying that something is portable and something else is not. I can make Tachyon as portable as I like, but like most embedded Forths it is usually optimized for the processor. Even when a C compiler is wrangled to support a target like the Propeller you can't guarantee all code is portable to and from the Prop.
Anyhow, a C compiler is always PC based, and no matter the merits of the "language" it will always end up as a static binary blob, devoid of all language, once it has been compiled. In the meantime I can program in Tachyon over Telnet straight on the Prop itself and test various functions while it is running, much to the chagrin of any of the much-touted glorious PC-compiled languages.
Isn't there a saying that goes "the proof of the pudding".......
The investment to make general things portable pays off. Library code, snippets, sorts, etc...
The more specialized it is, the more likely portability makes less sense. It might be done, and never see much other use. But it still might make sense to be portable.
For something like Peter is doing, interactive, extensible, run time modifiable, etc... seem like really high value capability. As a hardware platform, it's unique enough to marginalize portability as a primary priority.
Even then, it sure seems to me that once a new kernel and the core assembly-language words (if needed on a new hardware platform) are done, you load a dictionary and the rest is mostly going to be there. Unlike a compiled thing, Forth is something where the language is the code, except for the high-speed, core things containing assembly language.
That is a level of power similar to what a HAL might do in a more sophisticated development environment. Forth seems very potent in this respect.
An interpreter can do similar things, but is typically much larger and slower than Forth appears to be, too.
Anyhow, a C compiler is always PC based, and no matter the merits of the "language" it will always end up as a static binary blob, devoid of all language, once it has been compiled. In the meantime I can program in Tachyon over Telnet straight on the Prop itself and test various functions while it is running, much to the chagrin of any of the much touted glorious PC-compiled languages.
That will most likely change on the P2. I envision a C compiler that will run directly on the P2, and there will be an OS running on the P2 as well.
Many things will likely change! I'll bet heater will be working on a JavaScript interpreter. I may look into porting micropython if no one else does.
Peter would just laugh at this idea, what with it being so huge and slow compared to a Forth engine.
Not sure we need an OS as such. But who is going to write that self-hosted C compiler?
An OS is probably not needed, but a self-hosted C compiler would need a way to load and execute programs stored on an SD card. As far as who is going to write the self-hosted compiler, just wait a few more days and my grand plan will be revealed.
You tease...
Leor Zolman's BDS C compiler ran on 8-bit 8085 machines with 32K RAM and CP/M. So a C compiler on the PII must be doable.
I could probably get my Tiny compiler running on a PII but full-up C is beyond me.
Do we get an editor to go with that?
A complete C implementation will take a while, but a compiler that can handle a C subset should be very doable. Of course there will be an editor. I already have vi running on P1.
As for an editor, there may be one in the ROM. We had the basics done on the 'hot' P2. Doing a nice text display is no big deal for P2. Could even do spiffy scrolling, etc.
Maybe just edit, assemble in ROM, load other goodies into RAM from SD. If we go nuts, bootable SD.
An OS is probably not needed, but a self-hosted C compiler would need a way to load and execute programs stored on an SD card. As far as who is going to write the self-hosted compiler, just wait a few more days and my grand plan will be revealed.
Interesting.
I don't quite grasp the productivity trade offs of self hosting.
I can see the appeal from a 'look what I can do' angle, but a compiler alone is not true development, and any constrained resource MCU will never open a PDF file, for example, or run a productive editor.
So you always need another development system for real work.
If someone is serious about self hosting on P2, then this work seems worth tracking :
https://en.wikipedia.org/wiki/Oberon_(operating_system)#Project_Oberon_.28FPGA.29
http://oberonstation.x10.mx/
It could port to a 1-2-3 board, and then have the code generator patched to P2.
While I agree with the gist of your post, you should never say never or impossible on this forum. Someone may just do it.
I don't really see much value in self-hosting of compiler toolchains. However, I can see the value in what Tachyon provides. You can write your code on a standard PC and then debug it using the full power of the implementation language on the target board itself. An interactive language like Tachyon gives you an easy way to incrementally test your code and explore the hardware that a compile-link-download-execute cycle can never provide. The same can be true for other languages like Javascript, Python, BASIC, Lisp, etc. An interactive language has many advantages that are not shared by batch languages like C/C++ or Spin.
Actually, Peter has articulated some of them right here.
I disagree with you on "productive editor" and that varies a lot among people. A skilled vi user can be extremely productive, for example. This is all about one's preferences and skills.
I'm not sure about Oberon as "serious" self-hosting. Really interesting to think about though.
As for PDFs and other goodies, yeah and no, right? A lot of that depends on the nature of the development being done, and how one uses reference materials. The "another system" may just be informational in nature, so that could be an iPad or tablet where it's undesirable to load or use tools, but which is just fine for information access.
There is also the P2 to program P2 case. Say it's self-hosted, but the do it all on the target system is not the intended use case. A development system could be a secondary P2. Frankly, once we've got the real chips, and we've got more tools and drivers than we do now, a P2 is perfectly capable of displaying lots of useful stuff. Somebody could actually do a lot on one that's intended to produce executables for another one, and or perform comms with the same.
For custom systems, or those that may be pressed into service for an extended length of time, it's kind of nice to not worry so much about having to maintain the ability to execute a given tool chain. VM packages, licenses, compatibility, etc... can be replaced with some initial documentation. As an example of this, a while back I was working with some of the people having to manage the very difficult problem of nuclear waste. They build big places, inspect them, then once they start using them, no human will ever again enter those places for thousands of years. The software, controls, and a whole lot of stuff they use will outlive most of what we are using today. They approach things really differently. I would not want to be them.
I won't argue it's a superior choice, just a valid one.
@Dave: Do tell. soon :0
I think we have been around all these compiled vs interactive debates a dozen times before here.
A spin compiler on a modern PC can pretty much compile and download code to a Prop about as fast as I care to type and test it. So the interactive nature of a programming system does not offer much benefit.
On the other hand... having a self hosting system means you only need to give it text and you can reprogram it. No installing compilers, IDE, USB drivers on your PC and other non-sense. No having to have to do all that for every possible machine you may find yourself in front of.
Given that every machine you use from now till as far ahead as anyone can see will have a web browser, I suggest that having the editor in the browser is the way to go. Put the compiler in the browser as well if possible, all served up from the target device.
Perhaps that's why I like to tinker with the Espruino.
A spin compiler on a modern PC can pretty much compile and download code to a Prop about as fast as I care to type and test it. So the interactive nature of a programming system does not offer much benefit.
Compilation speed is not the issue. With a true interactive language you can type program snippets or just expressions and have them evaluated immediately. You don't have to write a whole program. Also, your execution state remains in effect and can be inspected and modified interactively. That will never happen with a Spin program. Spin is a batch language.
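For anyone who hasn't sat at a Forth console, a typical exchange looks something like this (generic Forth, and the word names are invented for the example); every line is compiled or run the moment you hit enter, while the system keeps running:
    2 3 + .                  \ evaluate an expression on the spot: prints 5
    VARIABLE tries
    : bump   1 tries +! ;    \ define a new word...
    bump bump  tries @ .     \ ...run it, then inspect live state: prints 2
    : bump   2 tries +! ;    \ redefine it without stopping or reloading anything
That last line is the part a batch language can't give you: the old definition is simply shadowed while everything else stays live.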
On the other hand... having a self hosting system means you only need to give it text and you can reprogram it. No installing compilers, IDE, USB drivers on your PC and other non-sense. No having to have to do all that for every possible machine you may find yourself in front of.
Given that every machine you use from now till as far ahead as anyone can see will have a web browser, I suggest that having the editor in the browser is the way to go. Put the compiler in the browser as well if possible, all served up from the target device.
Only now you have redefined 'self hosting' into something else again.
Web Browser Centric has some benefits, (speed is not one of them) but self-hosting it is not.
You still need to install compilers/ide, and that code has to reside, and be updated, somewhere.
Then you find not all browsers are the same...
Even Gmail drives me nuts, on a decent PC.
My Wifi is far less reliable than USB, and both are worse on Laptops/tablets than on a PC.
I hate browser tools. They work, until they don't, and when they don't, the hassles are not worth using a browser. Over the years, I have accumulated a disgusting number of hacks, old version installers, and a pile of other things all propping up browser tools of some sort.
AND this was on a local server.
I would say a browser-based tool sounds like a great idea; in practice they are maddeningly slow. Yes, it sounds like a great thing, being cross-platform and all, but like I say again: SLOW. Browsers are just doing too much and the rendering just bogs it down. After a few minutes you want to throw the keyboard through the monitor.
Here is my take on browser-based development:
1) Speed is not an issue. The OpenSource Spin compiler when transpiled to Javascript runs pretty much as fast in the browser as it does when natively compiled from C++.
2) It's "self hosting" in the sense that the target system has its own web server that serves up all the required HTML/JS to the browser.
3) The connection reliability depends how you do it. WIFI to my various devices around the house just works. If you want a cable there is always ethernet. (Or even PPP over a serial line)
4) There are no compilers/IDE to install. It's just a web page served up by the target.
5) Browser differences are a pain. That pain is avoidable. Most web sites work just as well on Chrome, Firefox, Safari. Microsoft is making huge efforts to be compatible with those guys with the new Edge browser. All the stuff we make around here works as well on all the major browsers and we don't make any special effort to make that so.
6) If this requires any plug-ins, "hacks, old version installers, and a pile of other things" to prop up the browser then it has been done wrongly.
msrobots and I managed to get what I describe above working for the Propeller a year or so ago. He was programming his Props over some Wiznet or whatever WIFI module connected to a Prop.
It was rough and ready but it worked. Neither of us has had time to polish it.
Another approach to this is to have the compiler run on the target itself, the real self hosting. The target would serve up the IDE, and perhaps a terminal and other tools, and away you go.
One thing which hooked me to the P1 was @mpark's(?) Sphinx.
A complete self hosted system, able to compile and run spin programs.
Actually Spin and PASM.
I was feeling young again, like at the time I owned my first computer.
The P2 will offer enough resources to provide very nice self hosting systems.
As Peter is showing us already on the P2V with Tachyon.
Sadly I can not really wrap my head around forth. I tried but failed. I followed Peter at the beginning of Tachyon (Version 1), loved to read his code and (at that time) even understood how the kernel worked.
But I could not get along with them HP calculators either. That RPN thing just does not click with me.
And obviously Forth seems to be portable, if done by the same person. Peter can use a lot of Tachyon P1 stuff on Tachyon P2V and surely also on the P2 in silicon.
The main positive thing on self hosting is the fast response to your changes. Forth and Basic are good there, JavaScript maybe also, but - maybe - too complex.
Sphinx from @mpark had no interactive command line interpreter like Basic or Forth have. It was edit - compile - run cycles. But quite usable and fun.
The point where Peter is WAY ahead of all the other solutions for programming the Propeller (1 or 2) is that in the commercial solutions he is selling for his living (I guess here), the software can be fine-tuned interactively even after it is installed, through the nature of Forth.
Very intriguing. I may need to move to Australia to get used to the Forth language. To me it is way wrong and backwards. I can read and understand some of it, but I do not think I could program like that. Too much COBOL in my life.
Enjoy!
Mike
Honestly, a browser has its merits. You just won't catch me using one for most things. I will use things like gmail, web, maybe some light utility served up in a browser.
But the truth is, standards and browsers never really did settle down. There is always something.
Now, maybe JS running with the browser as just a window could provide something nice. I suspect it would. But how long would it really last?
One nice feature of a set of self-hosted tools, is they can be static. This is one pretty great thing about P1 SPIN. It's mostly static. People have made some tweaks to add conditional builds, and that @@@ thing, but it's really unchanged. I like this, because once one has their skills all mapped over, and the basics learned, it's just always there. And it's gonna continue to be there too.
So long as the package is complete, few bugs, capable, that's very, very attractive. Not to say better tools aren't. They clearly are, and keeping those updated, relevant, etc... makes a ton of sense. C on the Propeller is a nice example of that, and it's darn capable now too.
To me, having this kind of option is important. I may personally not always use it, but having it means I've got a baseline, "works no matter what" scenario, and over my career, I have found those to be extremely handy and productive. My core set of UNIX skills are like that, sans this systemd mess I suppose I'll get over one day. But that's kind of a rare thing in UNIX land, and it will pass, or get sorted, whatever. One of the reasons I keep an old 8 bit machine around is that simple, static nature. I can turn on the Apple 2, drop right into the built in assembler, knock something out, run it, be done, move on, using understanding I've had so long I don't even remember when I got it. It's just there.
On P1, I can be away from it for a long time, and in a day, get back to a place where I can put it to use, get something done, and move on.
When I come back to browser things, my browser needs an update, or something or other needs to be fetched, or I need a different one, or some security thing or other is a complete bother... Hate it. Unless I use it every single day, and whoever made the thing understands the value of features and the cost of relearning / adjusting. Then it's much less of a worry. Big changes that don't really do much, maybe follow a trend or other, cost me time and learning that I could be applying to what it is I want to do instead.
So, that's why keyboard / mouse and or serial and straight ASCII make sense as a baseline self-host target. We can always add a ton of stuff, and pile it onto an SD card. And we should too.
But the constant, works no matter what should be simple, lean, capable, IMHO.
If it were me, I would make that minimum assembler, editor, monitor, and provide hooks for things to be made in the future to be used easily and in a seamless way. PASM, with hubexe is pretty nice. It's all one might need for a lot of simple, "what does this do?" type programs.
Say you had BASIC or a FORTH. A monitor command, patched in when those get loaded, would allow an easy entry into either of those, and those could allow a drop back to the monitor, and either could use the assembler as needed.
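In Forth terms that "patched in when loaded" idea is just a vectored word; a native monitor would do the equivalent with something like a jump-table entry. A rough sketch using the common DEFER/IS words, with all the names made up:
    DEFER LANGUAGE                            \ hook the monitor ships with
    : NO-LANGUAGE   ." nothing loaded" CR ;
    ' NO-LANGUAGE IS LANGUAGE                 \ harmless default behaviour
    \ later, loading a BASIC (or a Forth) from SD patches the hook:
    : BASIC   ." entering BASIC" CR  ( ...interpreter loop... ) ;
    ' BASIC IS LANGUAGE                       \ the LANGUAGE command now drops into BASIC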
So, if you've only got the chip, you've got enough to bootstrap anything you want. Literally, just type it in, if you want to. And do that from anything ever made that will support a simple serial connection.
That can be something dead simple, like "assemble to RAM" and let the user worry about where to put it. The monitor can write it to the terminal, for saving. That's enough to get the basics done. Someone with a simple block storage device could build something nicer in pieces, stash them there, then load it all in, etc...
I was starting down the road of exploring doing this on the P2 hot chip as a proof of concept. Didn't have an assembler on the chip, but I did very successfully, assemble some things, load them in and run them. Assemble more things, load those in, run those. Write it all out to the screen, and an application could be loaded back in just fine, using just what was on the chip. Doing things like moving from a TV display to VGA, while a program was running, worked just fine! Did it with the fractal program I did. Kill the video COG, stream in the other display code, launch the COG, and it all picked up where it left off.
Peter can do that kind of thing on his running FORTH system all the time, and with a lot less technical detail needing to line up just right.
If we had monitor, assembler, editor on the thing, similar approaches would be possible. This would give people the option of working, "in-situ" as opposed to a big build, boot, load, run, batch type approach we see as the vast majority use case today.
If we end up doing SD for example, well then boot from that, and the pre-planned hooks mean being able to load filesystem support, and other options, SPIN, DEBUG, C, whatever one wants to do. That setup could be used to develop for other chips, including P1 at some point, and it's pretty nice. Build whatever you want above LOMEM or some other basic scheme, and it's all gonna happen pretty easy. Build it in pieces, if you want. Write them all out to storage, and either load them in, or link 'em, or load dynamically, whatever makes sense.
For really big things, people will want to batch it. It's hard to use the chip, do that, build image, boot, test, reboot, load tools, etc... A more powerful, external system is clearly the way to go.
But, for a lot of smaller things, or learning? Doing it on the device, live, while you've got tools available, and things are running can be really educational and damn cool. An example might be having a capture / display type program monitoring some pins, showing activity of various kinds. Hook that sensor, or whatever it is right up, see the activity, then write a little code, run it on a free COG, and see the activity both ways, all with little more than some software and time.
In simple terms, that's precisely how the Apple 2 was done years ago. Those things got used for a lot of cross development, because they were open and designed to be used in these kinds of ways. They ship with editor, BASIC, monitor, line-assembler in the box. Once, just for grins, I did type in a DOS and write it to a disk. Took a couple hours, and then I could boot from that disk, do other things...
For something awesome at this, like a FORTH, one can literally assemble a small kernel and whatever it needs to talk to the outside world. Once that is done, send it a dictionary, and it's doing a lot of stuff in a pretty short amount of time.
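Roughly speaking, the kernel only has to supply a few dozen primitives in assembly (things like KEY, EMIT, @, !, the arithmetic and the compiling words); everything above that can arrive over the serial link as ordinary source text, along these lines (generic Forth, names just for flavour):
    : CR       13 EMIT 10 EMIT ;
    : SPACE    32 EMIT ;
    : SPACES   ( n -- )  BEGIN DUP 0 > WHILE SPACE 1 - REPEAT DROP ;
    : GREET    ." kernel up, dictionary loading..." CR ;
    GREET
Send enough of that down the link and you have a full system, built live on the target.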
That kind of thing is what "self-hosted" means to me.
You start with basic comms and build out whatever you want, or just write what you need to write and then it's there working. Whatever the basics are, they are present, robust, and pretty much the minimums needed to do anything else.
And they are optional.
One could ignore all of that, get a PC, whatever, and pretend that stuff is never there. Since it's a copy-to-RAM scenario, which I wish we had an "ignore writes to low 16K RAM" bit for, ignoring it is super easy. And the non-self-hosted tools are either a much larger superset of the hosted tools, or are something else entirely. People will make whatever they want to make. And those may change a lot, or not much at all, depending too.
One use case for the self-hosted scenario is a sort of "bench computer" where it's got the basics on board. You don't do big stuff with it. You do use it for all sorts of little things, saving off tools you make as you go, or sharing them with others using a similar setup.
Signal generator, lo-fi scope, logic analyzer, data logger, test / measure with some simple additional circuits, calculator, debugger, visual and aural indicator / monitor, etc... all there in one nice, simple machine. I would use the Smile out of something like that when learning new parts, sensors, or in tandem with stuff I build, or things I might want to repair / hack on...
Here's one I did a while back with someone. We had a funky IR controller. Just wanted to plot its signal out on something, to be evaluated to see the patterns and get some idea of what protocol it was using. Didn't have a lot of fancy stuff. A P1 did this job nicely enough. Was pretty easy. Or maybe one might want to see some data represented in some simple way, colors, lines, whatever. That's easy stuff too, and pretty useful at times, particularly times when one doesn't have other gear, or has to go and sort through a LOT of stuff, its changes, updates, dependencies (browser anyone?) etc... just to plot a few dots on a screen, or make a sound.
I've used P1 a lot for this kind of thing. And frankly, the PC and all that jazz can get in the way.
The idea there is simple, consistent, powerful. If you have one, type in a few lines, and it does something, or run a little thing and it does something else. If you don't have one, take a day or two and make one.
Again, that's what the self-hosted path means to me. Maybe totally native makes sense, so go keyboard, mouse, monitor or LCD. Fine. Or maybe, serial makes the best sense. There it is, in a little window, and if you want, it's video output in another little window (which is why I like TV drivers and how they display dead simple using a capture card, or some other basic device), and it's all there, plus your forum, PDF files, etc... too.
If one is working above that, or in tandem with more complex things, it's not likely to make sense. Get a PC, tablet, whatever and go. No worries. Same goes for people using expensive, or complex tool sets they are accustomed to.
But the truth is, standards and browsers never really did settle down. There is always something.
True enough, web standards have been evolving at quite a pace since the mid-1990s.
On the other hand nothing ever gets broken. The standards committees for HTTP, HTML, CSS, JS are constrained by the requirement for backward compatibility. They can never do something that would break the web. That HTML and JS from 1996 will still work in your browser today.
Certainly things like Java, FLASH have given a lot of trouble. None of those are WEB standards. Then there is all the junk MS put in to the browser, not WEB standards either.
MS has recently made the bold move of introducing the Edge browser that has all their proprietary Smile removed and a ton of standards compliance added. And they are making it the default on Win 10. There is a great video about that here if you have a minute. It's like a 25-minute-long public apology for all the wrongs of Internet Explorer over the years. Stunning:
Now, maybe JS running with the browser as just a window could provide something nice. I suspect it would. But how long would it really last?
Yep, that's what I'm getting at. How long would it last? See above. It's been working for twenty years so I see it working well into the future.
Having said all that. I also love the simplicity of a serial connection. If a self hosting system, in the traditional manner, can be implemented on the Prop I'm all for that too.
Both approaches are ways to get what I really want, which is getting rid of platform-specific compilers, IDEs and other tools. I should be able to program a Prop given nothing but the Prop board and something with a screen, keyboard and mouse. Any time, any place. Be it Mac, Windows, Linux or whatever else comes down the pipe. No binary installation, no rebuild of tools, no driver installation.
I'll just wait and see on the browser. Yes, the intent is right, but the stuff I have needed has never actually worked longer term.
If that changes? Well, maybe. It's gotta work for a while, and I really don't want to think about how to make it work either. That is the bar for me on browser tools.
Give us an example of one of your non-working web needs.
JS in the browser has been a stable platform for twenty years. There are plenty of natively compiled apps that have broken when the OS changed under them in that time.
The Prop Tool is a classic example. It's no use to the new Mac-using generation, with no way forward without scrapping it and going the cross-platform open-source route with PropellerIDE and so on.
LOL.
Well, the tone of this thread seems very upbeat for self-hosting systems, even if it is mostly for the P2. I've never bothered to fully complete the self-hosting for the P1 itself, but it would be a bit of fun to make a stand-alone computer. I've done some PS/2 keyboard interfacing before, so even assuming that we just work with the simple 40x15 VGA display built into the kernel, I believe I could knock up a single-chip Prop that has everything needed to self-host. So we would have:
1. VGA text display
2. SD FAT32
3. Tachyon and assembler
4. vi style editor
5. PS/2 keyboard
6. Sound + wave player or signal generator
7. SPLAT logic analyser
8. WIZnet server and email client
I/O-wise on a P1 this would consume a lot but we could share some lines. However at the worst without sharing we would need 23 altogether including I2C and serial. That would still leave a minimum of 9 I/O, not too bad. By sharing pins we could gain another 5 pins or so.
I believe it is quite possible to write a Basic in Forth as well. Plus, if we can spare a little RAM and a cog, the display could be upgraded to 80x25 or 80x40.
This is definitely doable, so if anyone would really like to have Basic then it's simply a matter of writing one from scratch, or else writing it in Tachyon and gaining access to all the filesystem and networking etc.
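Just to give a flavour of how little it takes to make Forth read like Basic, here is a toy sketch (plain generic Forth, nowhere near a real Basic, and the names are only illustrative):
    : LET    ( n "name" -- )  CREATE , DOES> @ ;   \ 10 LET X  makes X return 10
    : PRINT  ( n -- )  . CR ;
    10 LET X
    20 LET Y
    X Y + PRINT      \ prints 30
A real Basic would of course need line numbers, strings and control flow on top of that, but it all gets defined the same way.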
Just to be really cool about this I could make it work on the ultra-tiny +P8 module that piggybacks onto the IoT5500 WIZnet module. However I do have many other boards that are better suited which I may just try.