@heater, "What is all this nonsense about "sketches"? Why don't they talk about extending your C++ program? "
Whatever the exact semantic hypnosis that the Arduinites performed, it worked, convincing huge numbers of near-luddites, who thought that "programs" were synthetic and "hardware" was corporate, that "sketches" were cool and "shields" were organic. By the time they realized that they were writing C code and building systems, they were OK with it. That's one amazing accomplishment with all sorts of consequences. Sometimes smart works best.
If one is wary of Spin, Forth on a Propeller is an excellent entry point. And pfth is likely the best for the rank beginner.
Arduino seems to have proven it isn't the hardware, the silicon, or the software that sells microcontrollers to the public... it is pure hubris... which provides the hook that captures their imagination. And so, I was hoping that F# was yet another Forth for the Propeller... but my hopes are in vain.
~~~~
I had a lot of trouble with Spin and the Propeller from the onset. Gave up, as those who seemed to 'get it' were unable or uninterested in supporting the slower learners. Everyone was directed to OBEX for a solution. And then Arduino crept in.
But I am happy to say that using Forth on the Propeller has been an excellent entry point to start over. I can even now comprehend why and where the stacks are in the Propeller... after all, Forth is all about stacks. Maybe C will make the Propeller 2 easy to adopt and fun, but there is a place for an interpreted language in learning new silicon and how to get the most out of programming.
I think there is a place for interactive languages for micros.
I remember working on the B-1B decades back; IBM had a flight line cart with a keyboard and screen, along with an interface box that hooked to part of the avionics suite. Well, this box had a variant of BASIC that allowed technicians to interactively issue commands to the IBM avionics to turn on and off certain subsystems, do queries, etc. Our HP1000s had a similar system set up when running diagnostics on the plane. It was a nice system to use and easy to learn.
Simple languages aimed at non-professionals are fun; Ladder Logic was another. It allowed non-ring-zero gurus to set up and test all sorts of mechanized systems and instrumentation without having to consult some code jockey who isn't at the plant, or worse, isn't available on a 7x24 basis.
Raining at the ski resort today, so no conversations on the ski lift about programming languages. But that means coding time!
I spent the day pulling apart the qz80 emulation. Darn, if you want a clever piece of code, this is it. We have LMM (a clever piece of code in itself, that enlarges the 2k cog space) and then some amazingly nifty code that exploits the way many of the Z80 instructions have patterns of bytes that determine which register is being modified. This is translated into PASM with bit shifts, and with some self-modifying code magic, some Z80 instructions can be emulated in only one or two PASM instructions.
I'm comparing this with 'high level' languages like vb.net, C#, F# and (probably) javascript. Take a jump table. You have 256 instructions and you have a number, and based on that number you want to jump to a routine. Worst case scenario with a high level language, you add in a "Select Case"/Switch type of statement and it could be 256 checks before you get the right answer. Old Basic used to have an "ON GOTO" structure that would have been quicker, but goto is uncool. CP/M uses a couple of lines of self-modifying code to do these jumps, with some fake jumps coded in.
So this is a little plug for PASM, which does all this in a couple of instructions, which I don't even pretend to understand, but I think it is also self-modifying code.
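For anyone following along, here is a minimal C sketch of that dispatch idea (not the actual qz80 code; the handlers and opcode values are just examples): instead of a chain of compares, you index a table of handlers by the opcode byte. The PASM version collapses the same idea into an indexed jump, which is why it fits in a couple of instructions.

#include <stdio.h>

typedef void (*op_handler)(void);        /* one handler per opcode */

static void op_nop(void)   { }
static void op_halt(void)  { puts("halt"); }
static void op_other(void) { puts("unimplemented"); }

static op_handler dispatch[256];

int main(void)
{
    /* default every slot, then plug in the handlers we care about */
    for (int i = 0; i < 256; i++) dispatch[i] = op_other;
    dispatch[0x00] = op_nop;
    dispatch[0x76] = op_halt;    /* 0x76 is the Z80 HALT opcode */

    unsigned char opcode = 0x76; /* would come from the emulated program */
    dispatch[opcode]();          /* one indexed call, no chain of compares */
    return 0;
}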
But back on topic re post #21
If I could type this into an F#/Spin hybrid on a Prop2, I would be happy:
let myvalue = PortA.input(mask)
if myvalue.Equals(7) then
    PortB.output.value := 1
else
    PortB.output.value := 0
There is more to that code, because every time you hit that '.' the IDE autocompletes all the methods that are available. That is valuable to me because it saves time searching Help on the internet. If you know that 'strings' is a class, you don't really need to read the help - just type 'strings.' and look at what comes up.
F# et al have all that as part of the IDE, and it is a huge reason I keep coming back to Microsoft's languages, despite all their flaws.
Propeller languages could add this to their IDEs. A typical Spin project has included files, and it would be pretty easy to scan through all the methods using a quick 'pre-compiler' and then add some sort of autocomplete (a rough sketch of such a scan follows below).
In a general sense, what I like about Microsoft's IDEs/languages is i) they are free, and ii) it is possible to write a piece of code that is free of syntax errors before even running it the first time.
I don't know how hard this sort of thing would be to add to the propeller languages, but I think it could make learning those languages easier.
addit:
tiny example - some sort of reminder in PASM to add the # for immediate operands, and to watch for constants bigger than 511.
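As a rough illustration of that 'pre-compiler' idea, here is a sketch in C; the file name demo.spin is a placeholder and the parsing is deliberately naive. A real tool would walk every included file and feed the names into the editor's completion list.

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* "demo.spin" is a placeholder for whatever file the project includes */
    FILE *f = fopen("demo.spin", "r");
    char line[256], name[64];

    if (!f) return 1;
    while (fgets(line, sizeof line, f)) {
        /* Spin method declarations start with PUB or PRI */
        if (strncmp(line, "PUB ", 4) == 0 || strncmp(line, "PRI ", 4) == 0) {
            /* the method name runs up to '(', ':', '|' or end of line */
            if (sscanf(line + 4, " %63[^ (:|\n]", name) == 1)
                printf("%s\n", name);
        }
    }
    fclose(f);
    return 0;
}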
Take a jump table. You have 256 instructions and you have a number, and based on that number you want to jump to a routine. Worst case scenario with a high level language, you add in a "Select Case"/Switch type of statement and it could be 256 checks before you get the right answer.
Not so. Your worst case scenario is actually the best case scenario. Any decent compiler for languages like C, Pascal, etc. will not generate a test for each of 256 different values; rather, it will generate a jump table, pretty much as you see in that CP/M assembler code.
I have just been playing around with this in GCC. It does its best to use a jump table even if:
a) You don't use all the values from 0 to 255.
b) You don't start from zero.
c) You have "gaps" in the switch variable values.
Have a look at this switch in C code and the resulting x86 assembler output.
volatile char x;
volatile char y;

int main (int argc, char* argv[])
{
    switch (x)
    {
        case 100:
            y = 0;
            break;
        case 101:
            y = 1;
            break;
        case 102:
            y = 2;
            break;
        case 104:
            y = 4;
            break;
        case 105:
            y = 5;
            break;
        case 106:
            y = 6;
            break;
        default:
            y = 255;
            break;
    }
    return(0);
}
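The assembler output isn't reproduced here, but the shape of it can be written out in source form. Below is a hand-rolled version of the same dispatch using GCC's labels-as-values extension (non-standard C, gcc/clang only); it is essentially the jump table the compiler generates, with the default target plugged into the gap where case 103 would be.

volatile char x;
volatile char y;

int main (void)
{
    /* label addresses as data: one table entry per value from 100 to 106 */
    static void *table[] = { &&c100, &&c101, &&c102, &&def,
                             &&c104, &&c105, &&c106 };
    char v = x;                        /* read the volatile once */

    if (v < 100 || v > 106) goto def;  /* range check, as the compiler emits */
    goto *table[v - 100];              /* one indexed jump, no compare chain */

c100: y = 0;   return 0;
c101: y = 1;   return 0;
c102: y = 2;   return 0;
c104: y = 4;   return 0;
c105: y = 5;   return 0;
c106: y = 6;   return 0;
def:  y = 255; return 0;
}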
This is how F# deals with case: a | (pipe) pattern match.
F# uses |> as "pipe forward", so the result goes to the next function, and you read it on the screen in the same order as it happens, not as if (function(inside function)) etc.
type Color =
    | Red = 0
    | Green = 1
    | Blue = 2

let printColorName (color: Color) =
    match color with
    | Color.Red -> printfn "Red"
    | Color.Green -> printfn "Green"
    | Color.Blue -> printfn "Blue"
    | _ -> ()

printColorName Color.Red
printColorName Color.Green
printColorName Color.Blue

// the same three calls using the pipe-forward operator described above:
[ Color.Red; Color.Green; Color.Blue ] |> List.iter printColorName
Many of us who first loved the Prop are from some very old-school sectors of the programming universe. I learned to program on the Trash-80, but my real education came from the purchase of one of the original SDK-86 development training kits where, after soldering it together, I had to learn to program in hexadecimal, concatenating the 3-bit to/from/instruction fields together to form a 16-bit code entered with a hex keypad and displayed on a small 7-segment display. I never did much useful with it, but I DID learn a lot about how the 8086 itself worked.
The Propeller has allowed many of us to regain access to those same close connections to the hardware that started with the 8088, 6800, 6502 etc. It has been wonderful fun, but most people wanting to DO SOMETHING with a computer don't have the time or the pure love of hardware that many of us here on the Parallax site have.
I became familiar with Parallax after they purchased the rights to the SX from Ubicom. I was astounded with the original concept/instruction preview data-sheets of the Propeller when it first was described. Once the Prop was a valid product, I designed it into a medical testing system I was working on. The Prop was a perfect fit for all of the control/overseer aspects of the design but the software of choice for the medical community is Labview.
Another engineer did the Labview part of the design and I was happy with it being that way, I had Tinkered with Labview and it seemed almost like Cheating to simply drag an object onto a screen and have all the code for that object automatically hooked up. I mean... I was a MANLY PROGRAMMER! You have to get your fingers dirty with bits and bytes and assembly code and compiling and all that MANLY PROGRAMMER stuff for it to really count. I mean... HECK! There are BIOLOGISTS out there who can build complex working, laboratory instrumentation in a couple of hours with Labview!
About a year later I was consulting on the design for an implantable pH sensor... to my chagrin the researcher wanted the software written in Labview... So I had to lower my manly programming standards and learn it. Or at least enough to do what I needed to do. In short, I probably could have written the program in C++, but it would have taken me at least twice as long to finish the design. I'm not completely Sold on point and click programming, but I would like to throw the concept out here for all of you compiler Wizzes to consider the possibility of creating some sort of simple, open source, user-extensible, graphics-based programming compiler for the TurboProp.
Everyone out there in this Brave new world is used to getting things done by pointing at things. LAPTOPS with KEYBOARDS are almost considered Gauche by the latest generation of kids brought up in the point and click universe. These Kids will be designing the next generation of Stuff and the financial make or break of Parallax MIGHT depend on how many P2's get designed into that Stuff.
How many of you have smart phones? An app that would let a smart phone program the P2 might open that window of opportunity for it to be used by a new generation of Kids who frankly don't give a damn if a few trillion cpu cycles get wasted if something does what they need it to do and it was easy to make it do it!
Here we are on the verge of actually having our wonderful new processor... and all the talk out here is for languages that require a huge learning curve before new users can do something useful with it. I have watched this group of programmers/users do AMAZING things when challenged.
Just for grins... could it be done? A (User) simple point and click programming language that lets all those Non manly programmers be lazy when they're in a hurry?
Everyone out there in this “Brave new world” is used to getting things done by pointing at things.
This might not be as true in general as it looks from where you sit.
Over the past couple of months I've taken to watching a load of YouTube videos of Google technical presentations and the HTML5 developer conferences and so on.
These are the guys building, for example, the V8 JavaScript engine in Chrome or the new HTML5 features and so on.
Wow, guess what, most of them look like they are thirty years younger than me but they are not "pointing and clicking" at anything. They are hacking code in an editor as we always did. You hardly even see an IDE in use when they are hacking code out live on stage.
All is not lost yet:)
It's a bell curve... Developers on the right, strive to understand technology and drive technology forward. On the other side, folks want to point and click. There's a bunch of folks in the middle.
My Spinneret and W5200 endeavors showed me a world of technology, older technology, that I thought I knew - I was wrong. Now a few years older and wiser, I realize how much I don't know. And I'm getting dumber by the day. One thing I learned: finding the roots of technology is extremely valuable.
Does it really matter if someone wants to implement F# on a Prop2? It sounds like a cool journey and a lot of learning on the way regardless of the outcome.
My Spinneret and W5200 endeavors showed me a world of technology, older technology, that I thought I knew - I was wrong.
I'm curious to know what it was that you thought you knew. If you see what I mean.
Does it really matter if someone wants to implement F# on a Prop2?
Not at all. If anyone wants to take on that challenge all power to them.
What worries me is the expectation that there should be an F# or a JavaScript or a Java or a Python for the Propeller without considering how unsuitable they are for the purposes for which the Propeller was built and the huge undertaking it might be to create such a thing.
I suggest everyone check out http://www.espruino.com/ to see how this goes. It's a brilliant thing, but is that what you want from a Propeller? Why not just get one of those supported ARM boards, or go nuts and get a Raspberry Pi.
Sadly I did, even before I saw your link here.
It's not a good read at all.
The guy makes an inordinately long post, worthy of Potatohead, to say basically nothing except:
1) ARM processors as featured in mobile devices are not as fast as the x86 in your PC/Laptop
2) Mobile devices don't have the huge amounts of RAM as your PC/Laptop
3) JavaScript is interpreted and garbage collected.
4) Therefore mobile apps using JavaScript are slow and horrible.
Wha, wha!
Tell us something we don't know.
Meanwhile, guys have been doing amazing things with HTML and JavaScript on mobile devices. If you have an iPhone or recent Android device, check this out: http://www.famo.us/ Three-dimensional animation and a physics engine. Works faster on a phone than it does on the AMD64 computer with 4GB of RAM I am using now. (Note: it does not use WebGL or canvas, just regular HTML divs and JS.)
Like the Parallax world, there are those who say X is impossible and then there are those who do X because no one told them it was impossible.
P.S. Potatohead, I often jibe at you for long posts but generally they have a lot more content to munch on than this guy. No harm meant.
Even before I saw your link here.
It's not a good read at all...
I finished reading it on the train this morning and I'm a little torn on it. It should probably be split into multiple articles because his individual arguments are pretty interesting but his final conclusion is far from bulletproof.
His point about GC being pathological when memory starts to run out is excellent, somewhat obvious, but still worth restating.
He's full of Smile when he talks about Moore's law being the only reason JS engines are fast now. He should try running IE6 on modern hardware to see how utterly wrong he is.
He's right to point out that an ARM CPU is an order of magnitude less powerful than a typical server, but that's not the whole story. Those SoCs tend to be differently balanced wrt CPU power, memory bandwidth, GPU, storage bandwidth, network bandwidth/latency... it's not a reasonable comparison. Maybe that should have been his point: they can't be compared, so they must be evaluated on separate criteria. (FWIW, since I'm new around these parts, I'll state that this is why I'm interested in the Propeller: it's optimized for a totally different set of values than most computing platforms, and that's really exciting. And Parallax is a real stand-up company, and that's important to me these days.)
And his point about GC being bad for performance is ... interesting. Even if you're in a GC language with no control over GC, like Javascript, you can at least avoid aggravating the GC with wasteful allocation. Two pieces of GC code can do the same computation with very different memory pressure, just like manually managed memory. He doesn't address this reality at all. I agree that the lack of determinism with a GC is a problem for interactive code, but he isn't acknowledging that it's more complex than the cliche full-UI stutter.
The comments on that page pointed out what I think is likely to be the final word on the matter, with CPython as the example: for dynamic languages ARC for the common case and GC for less common reference loops is probably the best general solution.
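To make the ARC half of that concrete, here is a toy reference-counting sketch in C (invented names, nothing CPython-specific): release is deterministic, with no collector pause, and reference cycles are exactly the case it cannot handle on its own.

#include <stdlib.h>

typedef struct {
    int refcount;
    int payload;
} Obj;

static Obj *obj_new(int payload)
{
    Obj *o = malloc(sizeof *o);
    o->refcount = 1;                /* the caller is the first owner */
    o->payload = payload;
    return o;
}

static Obj *obj_retain(Obj *o) { o->refcount++; return o; }

static void obj_release(Obj *o)
{
    if (--o->refcount == 0)
        free(o);                    /* deterministic: freed right here */
}

int main(void)
{
    Obj *a = obj_new(42);
    Obj *b = obj_retain(a);         /* a second owner appears */
    obj_release(a);                 /* still alive, b owns it */
    obj_release(b);                 /* last release frees it, no GC pause */
    return 0;
}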
If you have an iPhone or recent Android device, check this out: http://www.famo.us/ Three-dimensional animation and a physics engine. Works faster on a phone than it does on the AMD64 computer with 4GB of RAM I am using now. (Note: it does not use WebGL or canvas, just regular HTML divs and JS.)
HTML, Javascript, and hardware accelerated CSS3 transforms. Not sure if that really has better performance characteristics and broader browser support than WebGL or HTML5 canvas. That is slick as hell, though. Lots of fertile ground being explored in the realm of CSS3 these days.
Oh, I forgot to mention one major point of the famo.us demo. There is no CSS3 in there! There is a video on YouTube where they explain that all that HTML5 CSS stuff is too slow, so they did the animations in JS. Hence the inclusion of a physics engine to do all the element inertia and bouncing etc.
The GC problem is complex. I have written equivalent code in JavaScript and Google's Go language. A simple thing that parses an XML stream input from a socket. Despite the fact that Go is compiled to native code, it was a few times slower than node.js and suffered from visible stuttering as the GC kicked in. The JS version was smooth and fast. WTF? My similar code in C++ is not much quicker.
Perhaps they're doing the coordinate calculations in JS, but the CSS at least has "-webkit-transform: matrix3d(....)" on Chrome. The implication is interesting, that CSS transitions were slow but the transforms were fast.
I know that most of us already know that garbage collection (and languages that require it) are not a good fit for applications that need determinism (and real time).
Off topic: I don't like the whole "push everything into the cloud" meme going around; I like having local processing power and storage. I mean really... "the cloud" is nothing more or less than the latest incarnation of thin clients serviced by remote servers. At least HTML5/JS does some local processing instead of shipping bitmaps around...
On topic: F# does not impress me - not that it needs to - I'll take C/C++, Python or JS over it any day. Most of my work is still in various assembly languages or C/C++ anyway.
If you take a major feature (like GC) out of a language where it is normally a critical feature, you just end up making enemies of the people who would normally use the language regardless of the platform.
What I find strange is trying to use GC-dependent languages, with their inherent "stuttering" due to GC, for hard real-time apps.
I would have to sit through the whole presentation video again to be sure, but I thought they said they had bypassed the CSS, big chunks of the browser processing having been observed to be much slower than their JS. Hence the need to build a physics engine.
Here is the video in question if you are interested: https://www.youtube.com/watch?v=f48_wJjmEw0
@Bill Henning
I suspect a lot of the world might be reconsidering this "push everything into the cloud" idea now that they have become acutely aware that "the cloud" is actually the NSA. You can have MS-NSA, or Google-NSA, or Amazon-NSA and so on. This whole snoop-on-everybody business might end up having a really bad effect on the economy of the NSA... sorry, I mean USA.
Still, "cloud" technology does have its utility. I'd just like to have our servers on our premises and more under our control.
I have not really looked into F#. There are so many languages around now it's impossible to keep up with YAFL. I suspect that is why the world will coagulate around JS: it's everywhere already, and no one has time to get serious about the multitude of other options. C/C++ will remain, as that's what we implement the underlying infrastructure in.
Garbage collection is fine and of course many higher level languages absolutely require it. It's more than a convenient feature to save programmers who can't sort out their new/deletes. It enables higher level language constructs.
Garbage collection for real-time code is a no-no, as you say. But I'm thinking that on a multi-core chip like the Prop we can tolerate a COG running a clunky garbage-collected language for higher-level functionality; we still have a bunch of COGs available for the hard real-time work. It's just an extension of the idea that we use the slow interpreted Spin language for convenience on the Prop already.
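As a sketch of that split, assuming PropGCC's cogstart() and its register macros (the pin number and stack size are arbitrary choices here): one COG runs a hard real-time loop that no collector can stall, while the main COG stays free for the clunky high-level layer.

#include <propeller.h>              /* PropGCC: cogstart, DIRA, OUTA, CNT, CLKFREQ */

static int blink_stack[64];         /* stack size is an arbitrary guess */

static void blink(void *par)        /* hard real-time work, alone on its COG */
{
    (void)par;
    DIRA |= 1;                      /* pin 0 as output (pin choice is arbitrary) */
    for (;;) {
        OUTA ^= 1;                  /* toggle the pin */
        waitcnt(CNT + CLKFREQ/2);   /* deterministic timing, no GC pauses here */
    }
}

int main(void)
{
    cogstart(blink, NULL, blink_stack, sizeof(blink_stack));
    for (;;) {
        /* this COG is now free to run the "clunky" high-level layer */
    }
}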
Absolutely no harm taken. And thanks! I like to read that once in a while. Appreciated. Good news too, because once the words start to flow... well, you all know.
In addition to the points Heater raised, I found the GC analysis notable given RAM constrained systems and real time requirements. (I can factor out the ego statements, etc... and just get the takeaways and be perfectly happy)
This:
It's optimized for a totally different set of values than most computing platforms and that's really exciting.
Yeah me too. Seconded.
Honestly, I still don't see this as a language problem so much as an education one, and a "template"/"cook book" one. A Prop is quite capable in experienced hands. Still easy to get in the weeds quick, though. IMHO, that is the nut to crack on wider adoption. We will have SPIN, PASM, some sort of LMM-capable assembler or other, and C. And SPIN will now accept PASM "snippets" too. Happy days ahead, and I suspect more potential to present the hardware in ways people can get hold of and run with, and experience fewer overall limits on a much larger scope of potential tasks.
There is a lot of work to be done sorting out how to use XMM, LMM, COG, SPIN, C, and threads, packaging it up and putting it out there for somebody to use more directly, before there is the next new language that will rope 'em in, IMHO of course.
I think there is a place for interactive languages for micros.
I will go further and say that there is not only a place for an interactive language on the Propeller; it is a necessity for the educational and introductory modes of learning the Propeller.
What we do not need is another 700-page tome that tries to prove that this platform and this language is a better value. Long texts never get read, but interactive software can capture a new user and have them hooked in seconds... SECONDS!!!!!
That interactive language doesn't have to be Forth... it could be Basic, or something else. It certainly wouldn't hurt to have several interactive languages represented... even Ladder Logic. The point is to gain new users and to give them the feeling that they can grow into knowing more without having to go elsewhere.
I've been thinking about this, too. Modern video games are written so that the player is taught how to play the game, as he plays it. How to do that for a microcontroller is challenging. I suppose incremental goal-based objectives would be the key. Whatever microcontroller had that going for it could become pretty popular. People enjoy genuine challenges much more than gratuitous ones. Learning a microcontroller would be way more gratifying and edifying than a video game.
Another take on this (which I think is already being done, maybe via Learn and the easy IDE) is to ship a "complete driver set and/or library" for each Propeller board that is supported in the SIDE board drop-list.
For example, if I were developing on a BS2 device, then I could just click the little chippie button on the menu bar and the program code would be set for that device.
Similarly, if I were to pick a board from the drop list, then why can't it #include "the_selected_board.h"? This header could be a library that included drivers/classes for each peripheral on the board (a sketch of such a header follows below).
It would be somewhat similar to the work being done by "one of the martins" on "porting 'duino libs to S-IDE".
At a higher level than that would be a GUI-driven programming tool that would be a lot like the tool for the S2 robot (with the widget set offered being driven by the selected board).
But none of these glam-tools should prevent me (or get in my way in any way) if I just want to sling code myself.
I know, I know, it sounds like a lot -- and it is -- but if you really want to drag over the masses, you have to remember that a lot of people just want to move into a house, they don't want to build one. But then there are others who like to make sawdust and bend nails, so they don't mind a fixer-upper as much. And when you start to talk about "the masses" I think we have to realize just how broad of a swath that really is.
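For illustration, a hypothetical the_selected_board.h along those lines; every name and pin number below is invented:

/* the_selected_board.h -- dropped in by the IDE for the board picked from the list */
#ifndef THE_SELECTED_BOARD_H
#define THE_SELECTED_BOARD_H

/* pin map for this particular board (all numbers invented) */
#define BOARD_LED_PIN    26
#define BOARD_SD_DO_PIN  22
#define BOARD_SD_CLK_PIN 23
#define BOARD_SD_DI_PIN  24
#define BOARD_SD_CS_PIN  25

/* one driver section like this per on-board peripheral */
void board_led_set(int on);
void board_sd_mount(void);

#endif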
Interactivity rings true for me too. And I think we can get that on a P2 to actually be useful and potent.
Not like it will replace the non-interactive methods for pushing the edge or larger projects, but a whole lot could be done, similar to how a Raspi works.
When I think back to the great interactive experiences I had, the Apple 2 continues to be on my mind. That one was built to build with, including a monitor, assembler, BASIC and a ROM full of goodies.
You don't need to have very much info to start exploring that one. A brief reference to the tools, keyword lists, CPU programming model, ROM listing, and CPU instruction set are it, unless you also want DOS, and that is a little more. It can be put into a single book that isn't huge.
The trick there is the learn-by-doing types had all they needed to just go and do. Others needed and wanted more, and of course that machine got documented rather well with a ton of language support.
With a robust and inclusive set of routines, an interpreted language could do a lot.
One other thing strikes me and that is utility programs. Say one wants to see the audio spectrum, or plot the DAC values, or hear a sensor... maybe plot graphs and such. Those things were what people wrote to then write other things or solve problems. The concept of objects and snippets is good, but utilities might be good too, and those end up more game-like in that somebody can run them and get results, tweak, change, etc., on their way to bigger things.
Utilities like:
Text editor, for assembling / compiling if desired.
Hex editor for SD/Flash, etc.
Function generator, maybe with RAM to DAC and DAC capture.
Terminal.
Goal-based programming, now that is an interesting concept and really an excellent idea. You essentially need an interactive teacher. The first thing that pops into my mind is to divide the screen into 3 sections:
1) Code section at the bottom of the screen (where the user types in code, like a word processor). The user would type the code here, and then it would compile and run, kind of like BASIC. Of course, since you have this other teaching program on top of it, the limitations would be much greater.
2) Display/output section (top right window). This is where the "result" of said code would be. You could have it be text output, fake LED lights, graphics, etc. These output windows would be set by the module so you can vary your lessons.
3) Teaching section (top left window). This is a text display where it would teach you. Essentially it would display objectives, lines of code (for you to retype, or an option to auto-type them), and an explanation of what the line of code does or why it is important.
So you would have this base teacher program that runs on top of the Prop 2, and you could load up modules from SD card (which are the teaching programs and also set the output window), then have the ability to save/load these short snippets of code you type up.
I think it would be key to have it run stand-alone on the Prop2 board itself, and it would be very doable.
Modern video games are written so that the player is taught how to play the game, as he plays it...
The language itself is something of a diversion; what is needed is an interactive environment.
That is a software problem, and once you have the ducks in a row, almost any language is a breeze.
There are already systems like this out there; here is some of what is needed:
* Good editor with syntax highlighting
* A chip simulator that is ready, sitting behind the editor
* After build, you can watch any variable, and step to any editor line.
When you have the silly mistakes removed, try it in a chip.
A polished system would also allow a debug link to the target silicon, so that ICE-like operation on the same chip-view code that is running with the simulator is possible.
Modern video games are written so that the player is taught how to play the game, as he plays it...
Interesting you should say this. We're working on arranging the LittleRobot curriculum like a video game. Each skill is introduced and mastered until the user can approach the "boss task" at the end of a section, which would be the target application. The trick is doing this without (or maybe with) flashy graphics and sound effects. It doesn't have to be done using Forth, that's just my choice for now. We are attempting to make the material in a comic book form, to get folks to maybe read them. It's not Mario, but it's a start.
The language itself is something of a diversion; what is needed is an interactive environment.
That is a software problem, and once you have the ducks in a row, almost any language is a breeze.
Didn't I hear someone talking about an interactive environment for the Prop? Sounded like they put some work into it, and that it does its job quite well.
It uses any generic editor and syntax highlighting; I've used Windows & Linux.
It doesn't have a simulator; it just runs directly on the real Prop, and uses one or more cogs to monitor the others.
When the silly mistakes are removed, it's already running on the chip, so we save that whole step. An in-circuit emulator isn't really needed 90% of the time, and the last 10% tend to need re-engineering anyway.
But this thread is about F#. What is it about F# that delivers this in a better way?