Not if you actually USE the correct type to start with.
Yes, good, it's better if you know the types you language deals with.
But now, what if the valid range of some quantity in my program is an integer in the range 3 to 29 inclusive?
Oops now I'm back to having to check that range in my code in order to ensure a valid program.
Pascal does not do that. Might as well use a type free language if I have to do all that work manually.
Ada on the other hand does allow one to define such types, and the compiler checks all of that.
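To make the complaint concrete, here is a sketch in JS of the manual range checking being discussed - the guard an Ada compiler would generate automatically for a type declared as `range 3 .. 29`. The `checkRange` helper is hypothetical, purely for illustration:

```javascript
// Hypothetical helper: enforce an integer range by hand,
// the work an Ada compiler emits for you from "type T is range 3 .. 29;".
function checkRange(n, lo, hi) {
  if (!Number.isInteger(n) || n < lo || n > hi) {
    throw new RangeError(n + " is outside " + lo + ".." + hi);
  }
  return n;
}

var quantity = checkRange(17, 3, 29); // fine
// checkRange(30, 3, 29) would throw a RangeError
```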
Great, but it's such a pain to use that nobody does any more.
Pascal is pointless whichever way you look at it.
re:JS is not a Microcontroller optimal language
I agree to a large extent. However it turns out that very low power, physically small and cheap solutions can be created on a micro-controller using JS. See Espruino.
Of course the "hard parts" are done in C/C++ for efficiency by somebody skilled in the arts. That only needs to be done once. After that a billion people can use it with the simplicity of JS.
What's not to like about it?
re:Yeah, I think we need to import "crt" or whatever it is that defines the I/O functions-
I can't remember the syntax for that. "import", "include", "with", "use", "require" ?
"uses" console from the W3C library
i.e. uses
W3C.Console
Also, I commented out a few statements on your Fibonacci code yesterday and got most of it to compile. At least I now know which lines are causing the problem. Also, yesterday I posted the problem on the SMS forum to see if they could spot the reason it won't compile.
Smart Pascal Web Version of BigFibonacci
http://www.pp4s.co.uk/main/prog-big-fibonacci-web.html
uses
  System.Types, SmartCL.System, SmartCL.Components, SmartCL.Application,
  SmartCL.Game, SmartCL.GameApp, SmartCL.Graphics, uCrtCanvas;
re:That seems to be the key advantage which makes this less of an ideal fit for P2, which is still a Microcontoller, not a JS engine.
Correct. However, when you use a Prop chip in something like a home automation controller and want to control it remotely, your best option is JavaScript running in a browser, or you can pair the Prop with another board such as a Raspberry Pi and use the Pi for the JavaScript client. Either way, it would be best to do it with web standards, so you have to figure out how to interface it via JavaScript, JSON, Node.js etc.
re:It seems like the world is making it more difficult to get applications to work together.
Actually the whole point of new standards such as the "Web of Things" (WoT) is making it easier for devices to communicate through web standards.
https://en.wikipedia.org/wiki/Web_of_Things
If you look at the Open Systems Interconnection (OSI) model for the Web of Things you will see that OSI layer 7 is HTTP, JSON, Web Sockets - your everyday web standards.
To see how the Web of Things works for the Raspberry Pi you can study this source code:
https://github.com/webofthings/wot-book
The Raspberry Pi model can be used as a starting template for the Prop.
Web of Things in action (live Raspberry Pi, camera, etc.):
http://devices.webofthings.io/
Micro-controllers like the ESP-whatever WiFi devices are running JavaScript. Micro-controllers are programmed in JS, Python, Java. Tiny little machines are running full-up operating systems like Linux.
The Propeller is not going to make a useful JS engine. But the P2 might. With the bonus that even if the JS is slow you still have 15 other cores for that good old fashioned real-time, micro-controller stuff.
Now about this SMS thing...
If it can't convert regular Pascal to JS that would be a bit of a problem. Nobody is writing new Pascal but they might have legacy code they would like to use.
That big-fibonacci program is pretty neat. It's not the recursive fibo though.
If I said Prop above I actually meant Prop 2; we already know that Prop 1 can't do the job. I'll play with the Fibonacci code again on the weekend. There is only one line that seems to be holding it up now and it could even be a compiler setting thing.
However we did manage to compile TinyJS with prop-gcc and I believe somebody even managed to get it to run on a Prop 1 with external RAM attached.
TinyJS is the small and simple forerunner of Espruino. Totally impractical but a bit of fun.
If it can't convert regular Pascal to JS that would be a bit of a problem. Nobody is writing new Pascal but they might have legacy code they would like to use.
We have an answer from the Smart Mobile team - gabr42
The reason for this error is quite simple - DWScript doesn't support assigning a function's result by assigning to its name (fib := 1). You have to assign it to the pseudo-variable Result (Result := 1).
The original code could be written in Smart Mobile Studio like this:
program Fibonacci;

uses
  W3C.Console; // alternatively NodeJS.Core

function fib(n: integer): integer;
begin
  if (n <= 2) then
    Result := 1
  else
    Result := fib(n-1) + fib(n-2);
end;

begin
  var Output: String;
  for var i := 1 to 16 do
    Output += IntToStr(fib(i)) + ', ';
  Console.Log(Output + '...');
end.

translates to:
function fib(n) {
  var Result = 0;
  if (n <= 2) {
    Result = 1;
  } else {
    Result = fib(n-1) + fib(n-2);
  }
  return Result;
};
var Output = "";
var i = 0;
for (i = 1; i <= 16; i++) {
  Output += fib(i).toString() + ", ";
}
console.log(Output + "...");

It can hardly be translated to much shorter code, I'd say.
It looks great to me. Also, the SMS support was awesome. I only posted the problem yesterday and they were all over it.
Here's a few tips from the SMS team - CWBudde - about converting code that may help you, depending on how you want to do it.
================================================================================
If you just want to use SMS as a pure Pascal to JS translator, I would suggest to start with an entirely empty project. You can even switch the CSS theme to 'none' in order to output the bare minimum. Also make sure you deselect 'Use main body' from the project options.
Once this is done, the only event handled by the precompiled application will be window.onload. And this will be the soonest thing you can access without supplying a custom HTML template.
If this is enough for you (mostly the case), you can continue to create all elements by hand. To get the lowest possible access you have to avoid all SmartCL.* and most System.* units to only use the W3C APIs. With these you can access everything you can do with JS and most of the time you even don't need to use asm.
For example you can hook other events like this:
uses
  W3C.DOM, W3C.HTML5;

window.addEventListener('resize', lambda
  // put your resize event handler code in here
end);
You might note that writing an SPA takes much longer with this approach, but you're as close to the metal as you can get.
re:That seems to be the key advantage which makes this less of an ideal fit for P2, which is still a Microcontoller, not a JS engine.
Correct. However, When you use a Prop chip in something like a home automation controller and want to control it remotely, your best option is JavaScript running in a browser or you can pair the Prop with another board such as a Raspberry Pi and use the Pi for the Java Script client. Either way you it would be best to do it with web standards so you have to figure out how to interface it via JavaScript, JSON, Node.js etc.
Oh, yes, for cross-platform use, where some code runs in a Browser, it makes more sense.
Makes better sense still, if a native code Pascal is also available on P2, so you can split your Host/P2 boundary (or Binary/script boundary if you prefer) somewhat at will.
But now, what if the valid range of some quantity in my program is an integer in the range 3 to 29 inclusive?
I'll admit, I never have use cases like that.
Closest I have come to that, is where I use a Binary-AND as a fast modulus. eg for a range of 0..31
That is a simple way to guarantee you never get out-of-bounds writes - the worst a bad index can do is go to the wrong place in the allocated array, which is relatively easy to debug.
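A sketch of that binary-AND trick in JS, assuming a power-of-two size (the `put` helper is made up for illustration):

```javascript
// For a 32-element array, ANDing with 31 (0x1F) acts as a fast modulus:
// i & 31 === i % 32 for non-negative integers, so writes stay in bounds.
var buf = new Array(32).fill(0);

function put(i, v) {
  buf[i & 31] = v; // a bad index wraps to the wrong slot, but never out of bounds
}

put(33, 7); // 33 & 31 === 1, so this lands in buf[1]
```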
There are also enums, which can be member-tested, or Sets.
Ada allows for defining such types and automatically type checking them at compile time and range checking them at run time.
Yeah, normally we just write our own range checking into our code and move on.
- but that range check assumes the Error has 'somewhere to go' - in a MCU, that is often not the case.
Hence my comment about writing code that is not 'range checked' but rather, is 'range bound', so by design it cannot go outside the range.
This removes the need to have 'somewhere to go' for the error trap.
Amazing. Not the actual "week day" thing but the general idea of odd looking ranges, a.k.a. types, turns up in programming all the time. Even on the smallest systems.
But, yeah, as I say, normally we write our own checks and limits into the code where needed and think no more of it.
...but that range check assumes the Error has 'somewhere to go' - in a MCU, that is often not the case.
Sure it has somewhere to go. It can HALT and just refuse to proceed any further. It can trigger a reset. If either of those make sense depends on the application.
The idea here is that if any such unexpected error occurs, or more correctly "exception", then your program is in an indeterminate state. To continue processing is just going to create more "wrongness". Not good.
Hence my comment about writing code that is not 'range checked' but rather, is 'range bound', so by design it cannot go outside the range.
This removes the need to have 'somewhere to go' for the error trap.
This worries me. Let's consider an example:
We have an array with 32 elements, indexed 0 to 31. An error occurs that tries to access element 33. We have ANDed the index with $1F to ensure no array bound violation. BUT we are now using array element 1, basically selected at random. Our program is in an indeterminate state!
I do agree, though, enforcing such bounds does help while debugging as it means random other data and code does not get overwritten when things go wrong.
Yeah, it's a good way to go when you can use machine attributes and simple means to design away problem cases.
Masking is pretty robust, if you can work with powers of two.
The alternatives are a lot more types, or write unit tests, or....
On a MCU, given they are resource constrained, spending the time, or making the compromise to use the "built in types" you get with word sizes and booleans makes a ton of sense. But, as the task requirements grow, that only works well, until it doesn't. At that point, it's gonna be some kind of types or tests and test cases, etc... right?
Oh yeah, this strongly typed vs weakly typed or typeless debate has been going on for decades.
Ada is one extreme example of typing and checking everything. The programming community pretty much totally rejected it. Now even safety critical code, for which Ada was designed, is written in C++ (see the Joint Strike Fighter project and others).
Javascript is one extreme example of typelessness. It will try hard to convert anything to anything automatically to make some operator work. The programming community pretty much totally rejected JS for decades. It's only fit for dumb web developers responding to mouse clicks on their web pages, right?
In the middle we have C. It has types, mostly to make things efficient, and it does some type checking, sometimes. Programmers loved it. Hence Unix and all its applications were written in C. Pretty much all embedded systems were written in C.
I won't go into the nightmare that is C++. You can bend C++ to your will, if you are skillful enough.
The alternatives are a lot more types, or write unit tests, or....
My conclusion after decades of working with avionics systems, military systems, machine control etc is:
If you want to be sure your code works you will be testing it thoroughly: exercising every execution path, every input case, every edge case, every edge case of the edge cases....
Ergo, as you are doing all that testing anyway, having types obfuscating your source code does not help. You have to do the work anyway.
At the boundaries to your system you need to validate inputs. I have seen that strictly typed Ada stuff fail there. People just assume, it's OK the compiler and run time will catch it.
Program correctness is dependent on a lot more than types and type checking.
One of the most error free systems I ever worked on, in avionics, had no types. Or perhaps I should say it only had one type, integer, if I remember correctly.
Why was it so error free? I can only imagine that simplicity rules:
1) It only had one type so you always knew exactly what any piece of code you were looking at did immediately, without having to remember a lot of type conversion rules or find out what the operator overloads did.
2) It had no way to create loops. No while, repeat, do until, goto etc. Your module started at the top, did whatever it did, and ended at the bottom. This makes the logic of any module very easy to think about.
3) Item 2 makes it very easy to test the code. Unit tests just had to set up the inputs, run it, and see what happens. It was very easy to create test cases that exercised every possible pathway through the code and all kinds of edge cases of data. It was very easy to create code coverage tools.
4) Item 2 makes it very easy for the compiler to report exactly the maximum execution time of a module. Which it did. Very important for real-time systems like flight controls and engine management systems. I have never seen a compiler that can do that simple thing since.
The language? Lucol. Almost impossible to find any description of it on the net today.
By contrast, the seriously type checked Ada systems I have seen could not make such guarantees. I once was involved in testing an Ada flight control system. I noticed the main work took between 50 and 90% of its allocated 100ms time budget, seemingly randomly. When I asked the devs how I could verify it would never exceed 100%, i.e. fail catastrophically, they had no idea!
Yeah, you described that thing before. It's interesting, and I've been drawn back to it in my thinking from time to time.
All the meaningful action is in the data. The actual execute path is well defined, unchanging. That moves a lot of the checking to the data itself.
I kind of want to explore that idea on P2. Seems to me, defined blocks of instructions that operate on defined data could allow for a sort of modular program that works in a way that is similar to that system. The max time on any block would be known, and each block could be tested. Dump 'em in there, and set up the data accordingly, then have COGs just rip through the blocks, one by one, etc... Maybe a fun curio one day.
And that is the major concept in the "functional programming" paradigm. See https://wiki.haskell.org/Functional_programming for example.
With or without types, functional programming makes reasoning about, and testing, program correctness a lot easier. Well, actually possible.
They are not so hot on the time bounds checking thing though.
I have also contemplated creating a Lucol like programming environment for the Prop. Not sure I'm up to it or who would care.
Ahh, so that is what all that fuss is about. I don't know a thing about Haskell. Maybe now I know just enough to think about knowing something about Haskell.
It may prove useful in an academic sense. Frankly, this chip is looking to be sort of a playground in that respect. The Pascal to JS discussion in progress is another aspect of that.
Lots of different computing modes are possible, and having the COG - HUB memory spaces makes for some interesting idea fodder.
I suspect, once we get the real deal out, a lot of interesting things can be taught on this chip. It will have interrupts, COG code, HUB code, shared memory, etc... A little VM, that can perform address, read, write trapping won't be super fast, but could do a lot of things needed to support the education.
OK. I'm not up to speed with the "functional programming" idea.
On the face of it, it sounds great. Put data into some function, get some result data out. No matter when you do this, or how many times you do this, it's always the same, very predictable and easy to reason about. Functions, or programs, never have any side effects on other data and never store anything internally.
Very mathematical, like a sin(a) function that always gives the sine of "a" no matter what.
That means you can test it very easily. As in the Lucol I talked about.
But, wait a minute, real programs need state, a memory of what has happened. How do I make a counter, for example, in a functional programming style? It's not possible.
Unless, I have a function like n2 = count(n1) which takes a counter value and returns the next value of the counter.
The answer seems to be that all the real inputs and whatever current state you have become inputs to your program, which in turn returns its outputs and the next state of the system.
It's all about the data, as you said.
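That state-in, state-out idea can be sketched in JS (the `step` function and its shape are made up for illustration):

```javascript
// Functional-style counter: state is an explicit input and output,
// never hidden inside the function.
function count(n) { return n + 1; }

// The whole program becomes (state, inputs) -> (new state, outputs).
function step(state, input) {
  return { count: count(state.count), output: state.count + input };
}

var s = { count: 0 };
s = step(s, 10); // { count: 1, output: 10 }
s = step(s, 10); // { count: 2, output: 11 }
```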
There is a simple way to look at "functional programming" that appeals to me.
Consider a typical shoot'm'up three dimensional game.
60 times per second that game delivers a whole pile of game state to a graphics engine, hence to OpenGL or whatever, hence to a GPU, and it gets drawn on the screen.
All of that hugely complicated stuff is functional programming.
The game engine at the top of the tree, driving all that, is not written in a functional programming style, it has state to maintain after all.
I'll need to do some thinking on it all one day. Right now, I'm in PASM land... Can't do it, or I'm gonna break flow.
Having native, big model assembly now is just fun. I've got some chunks in the COG, P1 style. They just work.
Now we have the ability to just drop stuff into HUB, data, code, and run or use it more directly. Big constants allow for inline masks, etc... it's nice. A bit more complex, but still accessible enough.
I'm gonna have to give C a play, but not for a bit. Lots of new instructions, toys to play with first.
I suppose it's back to you guys' Pascal to JS discussion. Will be interesting to see language options play out on this thing.
That Lucol language's concept sounds like some key to the future. Indeterminate timing is the bane of embedded-system programming, and the world is not moving towards any solution, but only further into unpredictability. Maybe someday a plane will crash because it wasn't served a web page in time.
Lucol was totally brilliant for its simplicity, timing determinism, and ease of analysis. Rock solid.
I don't know about "key to the future". That was two or three decades ago. Lucol is extinct now. As far as I know it was an in house language and only ever used within that one company.
Programmers pretty much all hated it, "What, no loop constructs, that's nuts...and where are my pointers...and why can't I write recursive functions....". Only a few of them seemed to really understand why it was designed that way, with so many restrictions, and appreciate the brilliance of it. They all wanted contracts writing C++ and Java!