I wonder if that is hard-won institutional knowledge? Classic MacOS was renowned for its memory management without VM, which had the same block-shuffling hallmarks of garbage collection.
Where did those 64 bit variables come from? I did not think they were on the table.
How does SQRT know if myval1 is an integer or a floating point type?
The 64 bit results come from the Cordic: the 32x32 multiply produces a 64 bit result, and the Cordic divide is 64/32 bits.
The Cordic square root function takes a 64 bit integer and returns a 32 bit result. Floating point was never mentioned.
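For illustration, here is a rough C model of the widths involved (the function names here are made up, not the actual Spin2/PASM interface):

#include <stdint.h>
#include <stdio.h>

/* 32 x 32 multiply -> full 64 bit result */
static uint64_t cordic_mul(uint32_t a, uint32_t b) {
    return (uint64_t)a * b;
}

/* 64 bit numerator / 32 bit divisor -> 32 bit quotient */
static uint32_t cordic_div(uint64_t n, uint32_t d) {
    return (uint32_t)(n / d);
}

/* 64 bit integer in -> 32 bit square root out (plain bit-by-bit method) */
static uint32_t cordic_sqrt(uint64_t a) {
    uint64_t rem = 0, root = 0;
    for (int i = 0; i < 32; i++) {
        root <<= 1;
        rem = (rem << 2) + (a >> 62);
        a <<= 2;
        root++;
        if (root <= rem) { rem -= root; root++; } else { root--; }
    }
    return (uint32_t)(root >> 1);
}

int main(void) {
    printf("%llu\n", (unsigned long long)cordic_mul(0xFFFFFFFFu, 0xFFFFFFFFu));
    printf("%u\n", (unsigned)cordic_div(1000000000000ull, 1000000u)); /* 1000000 */
    printf("%u\n", (unsigned)cordic_sqrt(1000000000000ull));          /* 1000000 */
    return 0;
}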
If the vector assignment thing evolves a little further you will eventually converge to C structs. This would allow something like:
new := rotate(x, y, angle)
where new_x and new_y are elements of the struct new. Using the "_" character as a struct delimiter may not be ideal, since it would interfere with using it in variable names. Maybe some other character could be used.
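For comparison, here is that struct idea sketched in plain C (illustrative names only, not proposed Spin2 syntax):

#include <math.h>
#include <stdio.h>

typedef struct {
    double x;
    double y;
} Vec2;

/* Rotate (x, y) about the origin by angle radians; both results come back in one struct. */
static Vec2 rotate(double x, double y, double angle) {
    Vec2 out;
    out.x = x * cos(angle) - y * sin(angle);
    out.y = x * sin(angle) + y * cos(angle);
    return out;
}

int main(void) {
    Vec2 p = rotate(1.0, 0.0, 3.14159265358979 / 2.0);
    printf("%.3f %.3f\n", p.x, p.y);   /* approximately 0.000 1.000 */
    return 0;
}

No special delimiter character is needed; the two results travel together and get picked apart with ordinary member access.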
BTW, I think most of the new features in P2 could be defined as intrinsic functions.
mul, div, etc. may well work with 64 bits somehow. That is a long way from introducing 64 bit longs into the Spin language.
Float is already in the Spin language, as floating point literals. It's just that Spin does nothing special with them. They are just weird bit patterns in 32 bit spaces. I thought new Spin was going to make use of float somehow.
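That "weird bit patterns" point is easy to demonstrate in C: the same 32 bits mean completely different things depending on whether they are read as an integer or as an IEEE-754 float.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 1.5f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);    /* reinterpret the bits, no conversion */
    printf("1.5 stored as a float is 0x%08X\n", (unsigned)bits);    /* 0x3FC00000 */
    printf("read back as an integer that is %u\n", (unsigned)bits); /* 1069547520 */
    return 0;
}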
That's just two of the three return syntaxes allowed by Spin1. I wasn't sure how to blend the result syntax into the mix. Maybe result[0] and result[1]?
One thing that's critical is that the compiler be able to determine unequivocally how many results are being returned and how many results are being expected by all calls to the method, so that it can flag an error if they don't match. Ambiguity might occur if you allow stuff like result[i++]. Then you might have to resort to syntax like this to allocate result:
PUB move2(x, y, dx, dy)[2]
This could also be used with named result variables, thus:
Quite so. But they are consistent with list syntaxes used elsewhere. Also, your quibble about calling it a vector assignment has some merit. How about "list assignment" instead?
Methods that return lists work pretty simply when used with assignments, but how about when used in expressions? Are we also going to allow vector arithmetic?
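In C terms the question is whether a helper like the one below gets generated for you, or whether every component-wise operation stays explicit (plain C, purely for illustration):

#include <stdio.h>

typedef struct { int x; int y; } Pair;

/* Component-wise addition has to be spelled out by hand in C. */
static Pair pair_add(Pair a, Pair b) {
    Pair r = { a.x + b.x, a.y + b.y };
    return r;
}

int main(void) {
    Pair p = {1, 2}, q = {10, 20};
    Pair s = pair_add(p, q);
    printf("%d %d\n", s.x, s.y);   /* 11 22 */
    return 0;
}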
... where new_x and new_y are elements of the struct new. Using the "_" character as a struct delimiter may not be ideal, since it would interfere with using it in variable names. Maybe some other character could be used.
A struct may be overkill where an array is sufficient. Maybe something like this to indicate:
One thing that's critical is that the compiler be able to determine unequivocally how many results are being returned and how many results are being expected by all calls to the method, so that it can flag an error if they don't match.
Quite so. In order to check for errors and generate code, the compiler needs to know how many inputs a function has and how many outputs it produces. To continue that idea, the compiler might also need to know if those inputs are BYTE, WORD, LONG or possibly even FLOAT. Same for the outputs.
Generally that has been taken to mean that the language has to support some kind of declaration syntax where all these details are spelled out.
Before you know it you are reinventing C++ or Ada. God forbid!
How about a different approach? :
What if the compiler comes across this in my code:
x, y := someFunc(a, b, c)
No probs, we can deal with that. someFunc() has three inputs and two outputs. Then later on the compiler comes across this in my code:
x, y := someFunc(a, b)
Oops, that's different.
At this point the compiler could bail with an error. Or it could assume a default value for the missing "c" input. Or it could assume there are two different someFunc() functions each with different numbers of inputs.
Note: So far the compiler has not even found a definition of someFunc() which might turn out to look like:
PUB someFunc(a, b, c, d)
return (x, y)
Oops, this function does not match any of the calls found so far. Time to bail with an error. Or continue until we find something that fits better.
I think what I am hinting at here is called "duck typing" nowadays. If it walks like a duck and quacks like a duck, it probably is a duck.
At least it is some kind of function type inference.
The neat thing about this approach is that it does not require all that heavyweight and ugly type declaration syntax.
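A minimal sketch of that kind of inference, assuming the compiler simply records the input/output counts it sees at each call site and checks them against the definition when it finally turns up (all names here are invented for illustration):

#include <stdio.h>
#include <string.h>

typedef struct {
    char name[32];
    int  n_inputs;   /* arguments seen at the call site */
    int  n_outputs;  /* results expected by the assignment */
} CallSite;

static CallSite calls[64];
static int n_calls = 0;

static void record_call(const char *name, int n_in, int n_out) {
    CallSite *c = &calls[n_calls++];
    strncpy(c->name, name, sizeof c->name - 1);
    c->name[sizeof c->name - 1] = '\0';
    c->n_inputs  = n_in;
    c->n_outputs = n_out;
}

/* When the definition is finally parsed, flag every call that does not fit it. */
static void check_definition(const char *name, int n_in, int n_out) {
    for (int i = 0; i < n_calls; i++) {
        if (strcmp(calls[i].name, name) != 0)
            continue;
        if (calls[i].n_inputs != n_in || calls[i].n_outputs != n_out)
            printf("error: %s called with %d in / %d out, defined with %d in / %d out\n",
                   name, calls[i].n_inputs, calls[i].n_outputs, n_in, n_out);
    }
}

int main(void) {
    record_call("someFunc", 3, 2);       /* x, y := someFunc(a, b, c) */
    record_call("someFunc", 2, 2);       /* x, y := someFunc(a, b)    */
    check_definition("someFunc", 4, 2);  /* PUB someFunc(a, b, c, d)  */
    return 0;
}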
... the compiler might also need to know if those inputs are BYTE, WORD, LONG or possibly even FLOAT.
Spin1 converts everything to longs in expressions and function call args and returns. I see no reason to change that. At some point, the compiler has to let go of the programmer's hand, after all, for the sake of simplicity.
I think what I am hinting at here is called "duck typing" nowadays. If it walks like a duck and quacks like a duck, it probably is a duck.
Sure, why not go even further? In Perl, every function call can be made in scalar or list context, implied by the call's syntax. A function might return a scalar (e.g. the length of the return list) in scalar context, or the list itself in list context. There are even provisions for functions to check in which context they were called, so they can return the right kind of value.
I see no reason to change the upgrade of everything to longs either.
Except....
Spin has some idea of floats in its syntax already. In literals.
I always assumed that idea of floats was going somewhere. Perhaps it's not. If it is, surely a distinction between a 32 bit int and a 32 bit float needs to be made.
Glad you are kidding. But perhaps such compile time "duck typing" is not so crazy.
Okay, given that, should mixed-mode expressions cause an error (requiring explicit int() and flt() conversions), or should longs be promoted to floats automatically in mixed-mode operations? What about converting a float value to a long when the value is too big? (I assume that losing precision going the other way is just a given.) Are we in an area where run-time errors have to be flagged?
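For reference, this is how C answers those same questions (not a claim about what Spin2 should do):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t n = 10;
    float   f = 2.5f;

    /* Mixed-mode: C promotes the integer to float automatically. */
    float product = n * f;                 /* 25.0 */
    printf("%f\n", product);

    /* Float back to integer truncates; the precision loss is just accepted. */
    int32_t truncated = (int32_t)product;  /* 25 */
    printf("%ld\n", (long)truncated);

    /* A float too big for 32 bits is the real hazard: the conversion is
       undefined in C, so it has to be caught beforehand (or at run time). */
    float big = 1e10f;
    if (big >= -2147483648.0f && big < 2147483648.0f)
        printf("%ld\n", (long)(int32_t)big);
    else
        printf("value out of 32 bit range\n");
    return 0;
}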
BTW, In ten+ years programming in Spin, I don't think I've ever used the floating-point libraries. I feel like I have more control over the results using longs and just scaling where I need to. I guess, if one is lazy or in a hurry and doesn't care about the accuracy of his/her results, floating-point can be a convenience.
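That style is easy to show in C: pick a fixed unit and keep everything in integers (the ADC numbers below are made up purely for illustration).

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Hypothetical 12 bit ADC reading with a 3300 mV full scale. */
    int32_t raw     = 2048;
    int32_t full_mv = 3300;

    /* Multiply before dividing to keep precision; the intermediate needs 64 bits. */
    int32_t millivolts = (int32_t)(((int64_t)raw * full_mv) / 4095);

    printf("%ld mV\n", (long)millivolts);   /* 1650 mV, no floats involved */
    return 0;
}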
No idea. I always thought the idea of byte size, word size, long size, double size, signed and unsigned integers, then float this and float that in languages like C, C++, Java, and C# was brain damaged. What we want is numbers.
BTW, In ten+ years programming in Spin, I don't think I've ever used the floating-point libraries. I feel like I have more control over the results using longs and just scaling where I need to. I guess, if one is lazy or in a hurry and doesn't care about the accuracy of his/her results, floating-point can be a convenience.
The only time I got into floats on the Prop was to satisfy the ZPU emulator and its C compiler. Just for fun.
As I like to quote my old project manager from the early 1980's when he explained to a new greenhorn graduate on our project team:
"If you think you need floating point to solve the problem, then you do not understand the problem. If you really do need floating point to solve the problem then you have problems you do not understand."
We were using a language that understood fixed point at the time.
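For anyone who has not met it, a 16.16 fixed point format is easy to sketch in C (entirely illustrative; nobody is proposing this exact thing for Spin2):

#include <stdio.h>
#include <stdint.h>

typedef int32_t fix16;                 /* 16 integer bits, 16 fraction bits */
#define FIX16_ONE 0x10000

static fix16 fix16_from_int(int32_t n)   { return (fix16)((int64_t)n << 16); }
static fix16 fix16_mul(fix16 a, fix16 b) { return (fix16)(((int64_t)a * b) >> 16); }
static fix16 fix16_div(fix16 a, fix16 b) { return (fix16)(((int64_t)a << 16) / b); }
static double fix16_to_double(fix16 a)   { return a / 65536.0; }

int main(void) {
    fix16 three_halves = FIX16_ONE * 3 / 2;                     /* 1.5 */
    fix16 answer = fix16_mul(three_halves, fix16_from_int(7));  /* 10.5 */
    printf("%f\n", fix16_to_double(answer));
    printf("%f\n", fix16_to_double(fix16_div(answer, fix16_from_int(3))));  /* 3.5 */
    return 0;
}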
I personally am neutral to slight negative about adding floats to SPIN. In the vast majority of cases, they are not necessary.
As a user, I agree. But from a marketing standpoint, and as a Spin enthusiast, I'd like to see native floating point in Spin2, just so C doesn't enjoy a perceived advantage in that area.
As a user, I agree. But from a marketing standpoint, and as a Spin enthusiast, I'd like to see native floating point in Spin2, just so C doesn't enjoy a perceived advantage in that area.
It is not just marketing - the P2 has Cordic support, so it is a 'Silicon Access' issue.
Any language that falls short of full silicon access is clearly an incomplete language for P2 design.
Practical constraints may well mean floating point has a user-switch, but the feature needs to be possible.
As a user, I agree. But from a marketing standpoint, and as a Spin enthusiast, I'd like to see native floating point in Spin2, just so C doesn't enjoy a perceived advantage in that area.
It is not just marketing - the P2 has Cordic support, so it is a 'Silicon Access' issue.
Any language that falls short of full silicon access is clearly an incomplete language for P2 design.
Practical constraints may well mean floating point has a user-switch, but the feature needs to be possible.
That is possible right now. SPIN handles the integers used to feed the CORDIC, and users have two ways to access it, one being through individual assembly instruction procedures, the other being in-line assembly to just get it done quickly.
We've got the silicon access. It won't be hard.
The approach you advocate was done on P1, and a whole lot of that went unused. It made a ton of sense though, due to P1 lacking HUBEXEC.
Now that we can in-line, keeping SPIN lean makes a lot of sense. Doing that again for P2, which has a lot more features, doesn't seem to make a lot of sense. One of the design goals is to make lower-level programming accessible to more people. What we saw on P1 was people would stick with SPIN, use objects, and/or write some PASM, depending.
Taking that same path will be easier on P2, as one isn't forced to write for a whole COG.
One other thing that occurs to me:
There is a lot of hardware there. How it gets used will be very interesting to see. Some of the common cases are as designed, and we know those, but the core of SPIN could get a lot more complicated having to include high level support for all of it.
Makes it bigger, slower, more difficult to target for on-chip, etc...
Throughout this whole process, it's kind of been implied that these tools are the core, a basis. Much more will follow on, once we know it's a lock and working silicon is a reality.
Synthesis next. Some hurdles to get past yet.
So the other way to look at this is having a small, lean, but fully functional SPIN makes a ton of sense! It will be possible to do it all, and that's good. Not everyone will want it that way, and so there is a considerable vacuum for other tools to follow.
I think it needs to happen this way due to the unique features of the P2. It's gonna take a while to sort out the right kinds of abstractions and/or the best ways to formalize some features. To see that, we are gonna need to see people writing a lot of code to exercise what we've got right now.
Not doing it that way runs the very real risk of putting serious development behind features that go unused, like some things did on P1.
And that gets in the way of production too. We don't really need that stuff. SPIN with in-line PASM will be more than enough to do real work on the chip; other, bigger, better, whatever tools can follow as demand and understanding develop.
Comments
Neat.
Methods could return any number of parameters.
-Phil
Yes! And that would get around the need to pass pointers in many cases.
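For example, the usual C workaround for a second result is an output pointer, which multiple return values would make unnecessary in many cases:

#include <stdio.h>

/* div_mod() must take output pointers because a C function returns one value
   (unless the results are wrapped in a struct, as suggested earlier in the thread). */
static void div_mod(int n, int d, int *quotient, int *remainder) {
    *quotient  = n / d;
    *remainder = n % d;
}

int main(void) {
    int q, r;
    div_mod(17, 5, &q, &r);    /* caller has to supply an address for each result */
    printf("%d %d\n", q, r);   /* 3 2 */
    return 0;
}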
I appreciate the cool features but have reservations about your proposed syntax: you are using array access syntax, "[]", to make a function call. Why not the normal "()"?
Where did those 64 bit variables come from? I did not think they were on the table.
How does SQRT know if myval1 is an integer or a floating point type? Again using array syntax for a function call. It is not clear what is an input and what is an output of this function.
This would be better: Multiple returns as done in Lua.
"Multiple reture values" perhaps. The round braces are redundant. See Lua.
Call as:
The Cordic square root function is 64 bit integer to 32 bit. Floating point was never mentioned.
BTW, I think most of the new features in P2 could be defined as intrinsic functions.
mul, div etc may well work with 64 bits somehow. That is far away from introducing 64 bit longs into the Spin language.
Float is already in the Spin language, as floating point literals. It's just that Spin does nothing special with them. They are just weird bit patterns in 32 bit spaces. I thought new Spin was going to make use of float somehow.
One thing that's critical is that the compiler be able to determine unequivocally how many results are being returned and how many results are being expected by all calls to the method, so that it can flag an error if they don't match. Ambiguity might occur if you allow stuff like result[i++]. Then you might have to resort to syntax like this to allocate result:
This could also be used with named result variables, thus:
-Phil
Methods that return lists work pretty simply when used with assignments, but how about when used in expressions? Are we also going to allow vector arithmetic?
-Phil
A struct may be overkill where an array is sufficient. Maybe something like this to indicate:
-Phil
Generally that has been taken to mean that the language has to support some kind of declaration syntax where all these details are spelled out.
Before you know it you are reinventing C++ or Ada. God forbid!
How about a different approach? :
What if the compiler comes across this in my code:
x, y := someFunc(a, b, c)
No probs, we can deal with that. someFunc() has three inputs and two outputs. Then later on the compiler comes across this in my code:
x, y := someFunc(a, b)
Oops, that's different.
At this point the compiler could bail with an error. Or it could assume a default value for the missing "c" input. Or it could assume there are two different someFunc() functions each with different numbers of inputs.
Note: So far the compiler has not even found a definition of someFunc(), which might turn out to look like:
PUB someFunc(a, b, c, d)
return (x, y)
Oops, this function does not match any of the calls found so far. Time to bail with an error. Or continue until we find something that fits better.
I think what I am hinting at here is called "duck typing" nowadays. If it walks like a duck and quacks like a duck, it probably is a duck.
At least it is some kind of function type inference.
The neat thing about this approach is that it does not require all that heavyweight and ugly type declaration syntax.
-Phil
'Just kidding, of course!
-Phil
Except....
Spin has some idea of floats in its syntax already. In literals.
I always assumed that idea of floats was going somewhere. Perhaps it's not. If it is, surely a distinction between a 32 bit int and a 32 bit float needs to be made.
Glad you are kidding. But perhaps such compile time "duck typing" is not so crazy.
-Phil
BTW, In ten+ years programming in Spin, I don't think I've ever used the floating-point libraries. I feel like I have more control over the results using longs and just scaling where I need to. I guess, if one is lazy or in a hurry and doesn't care about the accuracy of his/her results, floating-point can be a convenience.
-Phil
As I like to quote my old project manager from the early 1980's when he explained to a new greenhorn graduate on our project team:
"If you think you need floating point to solve the problem, then you do not understand the problem. If you really do need floating point to solve the problem then you have problems you do not understand."
We were using a language that understood fixed point at the time.
Fixed Point would be spiffy. I'm a fan. But that doesn't need to be an addition either.
-Phil