spin2cpp converts Spin to C++ (or C) and then compiles it with a C compiler, normally PropGCC. So all of those optimizations are available.
I was thinking OpenSpin targeted the actual Spin runtime, which seems more compact than the C/C++ interpreter. I've always wished that the Spin tool would optimize - so much existing code would benefit from it.
At the end of the day readability is preferable to typability. Code is read more often than it is written.
But hey, it was only a "for instance"
That's $ on my keyboard. I guess that's the next challenge: finding a readable character that is common to various character sets and keyboards.
I'd like to propose that any Spin extensions write method pointers as "PUB putstr(x)" instead of "PUB.putstr(x)", since it's more consistent with the syntax for method definition (and looks cleaner to me, and would be slightly easier for my parser (which thinks everything, including PUB block headers, is an expression) to parse).
I'd also like to propose that .+, .-, .*, ./, .^, .=<, etc. be used for floating point operators, instead of overloading standard Spin operators that want ints and creating a potential source of errors. That way, the can of worms that comes with operator overloads is avoided, and you won't run into trouble if you only put your optional types in some places. These .-prefixed operators are borrowed from OCaml, which makes you do all casts explicitly and has no overloads (a feature!). Also, then .^ can mean exponentiate; it certainly would be very confusing for ^ to mean xor for ints and exponentiate for floats.
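A rough sketch of how such operators might desugar (hypothetical syntax; the library object and method names here are just assumptions for illustration):

  OBJ
    f : "F32"                          ' any Spin/PASM floating point library

  PUB demo(a, b) | c
    c := a .* b                        ' desugars to c := f.FMul(a, b)
    c := a .- b                        ' desugars to c := f.FSub(a, b)
    if a .=< b                         ' float compare, never the integer =<
      c := b

No value is ever silently reinterpreted: using "*" on two floats, or ".*" on two ints, would be a warning or an error.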
I'm glad to hear you're still working on spin2cpp. Would there be any advantage to creating a spin2llvm? I've started looking at what is involved in writing an LLVM backend for the Propeller, both P1 and P2.
YES PLEASE! Then create LLVM code generators for the Propeller 1 and 2 and use common code generators for Spin and C! =O
@Electrodude, my personal preference is to overload the standard operators. The floating point variables will be typed, so the compiler will know exactly which form of the operator to use. Using types will allow for error checking, and help with the common problems that newbies have with pointers.
My only concerns about adding features to Spin is that we need to keep the language as simple as possible. A novice can learn Spin very quickly because it does not have a lot of features. I think any enhancements that are added should help to fix common programming errors. The compiler should check for common mistakes and warn the programmer.
Types in Spin need to be left optional, since types don't even exist in standard Spin. Optional types combined with operator overloading seems to me like a recipe for disaster. It needs to be able to deal with floats that aren't typed.
What if you multiply two values that came from methods in an external object that you didn't write and would rather not modify and that is written in standard Spin (and hence has no declared return value type)?
OBJ
  o : "StandardSpinObject"

PUB func
  return o.foo * o.bar
How is the compiler supposed to know whether o.foo and o.bar return floats or ints? The programmer knows, and the comments (that the compiler can't read) in StandardSpinObject.spin probably specify, but the compiler doesn't know. Since it doesn't know, it has no way of knowing if it should use f.fmul or integer "*". If the programmer uses an editor macro to replace all instances of "f.fmul(a, b)" with "a * b", he'll be left with a lot of bugs. On the other hand, if he replaces f.fmul(a, b) with a *. b, it will work fine. The only other alternative would be filling your code with a lot of ugly casts.
"(float)x * (float)y" is a lot uglier than "x *. y". Also, the first is a potential source of confusion, since it might not be obvious to everyone whether that first "(float)" applies to just "x" or to the whole expression "x * (float)y".
Extended Spin should change nothing in standard Spin. It should accept a standard Spin file and emit output with no warnings that does exactly the same thing (but is maybe optimized). It should not change the meaning of any already-valid Spin constructs. Any new Extended Spin features should not compile with a standard Spin compiler.
Look into OCaml's type system. Not the part about variants and matches and such, just the part about type inference. I've never really needed to use OCaml and don't know it very well, but its type system seems really well designed. AFAIK, you never need to define function argument types, and yet it figures them out anyway by looking at what operators and functions the arguments are fed to and where the return value comes from. That's what I want to do (where types are defined, that is). I think we might be able to find a way to add that to Spin without breaking backwards compatibility.
https://ocaml.org/learn/tutorials/basics.html#Typeinference
https://ocaml.org/learn/tutorials/basics.html#Isimplicitorexplicitcastingbetter
(I just realized that OCaml actually uses "*.", "+.", etc. and not ".*", ".+", etc. I don't really prefer one over the other, or if something entirely different (that doesn't overload standard Spin operators) is used.)
Any code that uses "*." would have to be new code, since Spin currently doesn't support such operators. So it would be just as valid to use types in new code as it would be to use "*."-style operators. New code that uses types can interoperate with old code by relaxing the type requirement. That is, you could have an option to not generate warnings if types don't match, or if an object doesn't use types. However, any new code that wants to do a floating-point multiply must declare the variables as float.
Spin already has an implicit type associated with constants defined in a CON section. They are either float or long, and you cannot mix floats with longs in constant expressions. So Spin is already overloading the operators in CON sections.
I agree, much better to have code that is clearer, and the type declares can manage the details.
Users may also want to harvest pieces of code, in some cases coming from other languages, and adding special operators certainly does not assist there.
I'd be in favor of operator overloading. This could be done in three ways:
1) Automatically promote operands in mixed-mode operations to the highest type between the two before the operation, optionally allowing float and int (à la MS Basic). (Spin already promotes bytes and words to longs pre-op.)
2) Same as #1, but issue warnings for mixed-mode operations.
3) Forbid mixed-mode operations, requiring the use of float and int where necessary (à la Fortran).
I'm just thinking about what would be the easiest to teach my students. Thus far, I've told them that Spin doesn't do floating point, just to avoid the FP libraries' function-call mania. For that, I'd lean towards #1, with pragmata available to revert to #2 or #3, similar to Perl's use strict.
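To make the three options concrete, here is how each would treat the same mixed-mode statement (a sketch in hypothetical extended-Spin syntax with a float declaration):

  VAR
    long  i
    float x

  PUB demo
    x := i * 1.5    ' #1: i silently promoted to float, float multiply, no complaint
                    ' #2: same code generated, but the compiler issues a warning
                    ' #3: error; must write x := float(i) * 1.5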
I think I could live with overloading as long as mixed-mode operations are forbidden or at least warned about (PhiPi's #3 or #2). As Dave Hein pointed out, Standard Spin already sort of has operator overloading, and where it does have, it completely forbids mixed operations.
Allowing mixed operations/automatic type conversions would make my type-inference plans hopeless.
I'm still going to implement explicit operators ("*." and friends) in my compiler (as well as overloads), so that float operations can be done without needing to explicitly declare types.
Allowing mixed operations/automatic type conversions would make my type-inference plans hopeless.
I don't think it makes it hopeless. If the compiler knows the types of x and y then it knows the type of x+y, regardless of whether x or y needs promotion. So I don't think it really changes the difficulty, as long as the promotion depends only on the types of the inputs.
Or promotion could be done explicitly like it is in the CON section. Spin uses the float, trunc and round intrinsics to do this. It seems to me it would make things simpler if the CON and PUB sections used the same syntax.
In a CON section you can do the following.
CON
  X = 1.2
  Y = 3.5
  Z = X/Y
  I = round(Y/2.0)
  J = trunc(X*3.5)
However, you can't mix longs and floats, such as:
CON
  K = 2 * 3.4
Of course, implicit promotion isn't that hard to implement. Something like "2*3.4" would just be a float value of 6.8. I assume in general any mixed expression would just be promoted to float.
The only mixed-mode expressions where I'd not do any promoting are the bitwise logic ops. Although it's hard to imagine what 4 & 3.77 even means, there could be a case where someone might be interested in the raw FP bits.
In the same vein, there should be pseudo-functions available to turn an integer into a float and vice versa without changing the bits, say forceint and forcefloat.
Or maybe bitwise ops should be illegal on float values, unless a forceint is applied first. One particularly onerous expression that should definitely trigger an error would be x << 3.5. Or do you simply demote the 3.5 to an integer 3 first?
Lastly, forcefloat would be absolutely necessary for any Spin methods that construct floating-point values the way the Spin-based floating-point library objects do now.
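A sketch of what that might look like, using the hypothetical forcefloat name (the field packing mirrors what the Spin-based float libraries already do with integer math):

  PUB pack(sign, exp, mant) : f
    ' assemble IEEE-754 single-precision fields into a raw long,
    ' then relabel the bits as float without any numeric conversion
    f := forcefloat((sign << 31) | (exp << 23) | mant)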
... Although it's hard to imagine what 4 & 3.77 even means, there could be a case where someone might be interested in the raw FP bits.
It would be common to split a float into 4 bytes to transfer over a serial link, for example, but most would be ok with a pseudo-function if that was needed in a strict case.
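For example, a send routine today just treats the float as raw bits in a long (ser is assumed here to be any serial object with a tx method); under a strict typed proposal, this is exactly the kind of place where a forceint would first be required:

  PUB sendFloat(f)
    ser.tx(f & $FF)            ' low byte first
    ser.tx((f >> 8) & $FF)
    ser.tx((f >> 16) & $FF)
    ser.tx((f >> 24) & $FF)    ' high byte last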
Or promotion could be done explicitly like it is in the CON section. Spin uses the float, trunc and round intrinsics to do this. It seems to me it would make things simpler if the CON and PUB sections used the same syntax.
In a CON section you can do the following.
CON
  X = 1.2
  Y = 3.5
  Z = X/Y
  I = round(Y/2.0)
  J = trunc(X*3.5)
However, you can't mix longs and floats, such as:
CON
  K = 2 * 3.4
Exactly. All type conversions are currently done explicitly, and they should remain that way.
I can't really argue against operator overloading if Standard Spin already supports it in constants.
Allowing mixed operations/automatic type conversions would make my type-inference plans hopeless.
I don't think it makes it hopeless. If the compiler knows the types of x and y then it knows the type of x+y, regardless of whether x or y needs promotion. So I don't think it really changes the difficulty, as long as the promotion depends only on the types of the inputs.
Automatic type conversions would mean return type inference would still be easy, but inferring argument types would become hopeless.
Allowing any overloading at all causes problems for my type inference. Consider the following:
PUB multiply(x, y)
  return x * y
With overloading, the compiler has no idea whether x, y, and result in my original multiply function are all ints or all floats.
With no overloading and without explicitly specifying any types, the compiler knows that x, y, and result must be integers. If you do "myfloatvar := multiply(1.0, 2.0)", the compiler would warn that you're passing floats to a function that expects ints and assigning an int to a float. If you want a float version, you can write:
PUB multiplyf(x, y)
  return x *. y
And the compiler will know that x, y, and result are all floats.
The most sensible solution I can think of to this is, of all possible overloads for each operator, mark one (the standard integer one) as the default to use if no types are specified.
By the way, my compiler should be able to deal with any variations on type inference, automatic promotion, overloading, etc. that appear in any other Extended Spins. I'll define "*." and overloaded "*" operators in interface files, and supplementary interface files can be written for each floating point implementation that link the (unmodified Standard Spin) implementation to the interface. #ifdefs in these interface files can be used to enable or disable overloading and automatic type conversion as desired by the programmer. The only reason I'm saying anything here about syntax and semantics for this is because it would be nice if there was compatibility to at least some extent between the various Extended Spins, and because having automatic type conversions and such would ruin many of my plans, so I'll have to make them optional (and default to being disabled).
I'd like my compiler to eventually be able to emit Standard Spin, after evaluating all macros, replacing new operators with method calls, etc. Once the framework for that is in place (I wouldn't expect it for a long, long time, if ever), it would probably be pretty easy to also make it able to output C++.
Could you elaborate? What about "x := someArray[6.29]"? It's probably an error, but it should still compile to "x := someArray[$40c947ae]" (possibly with a warning), since that's what Standard Spin would do.
Well, traditionally arrays have an integer number of elements, the first one being 0 or 1, but sometimes something else.
Indexing such an array with a float makes no sense. Using the bits of the float as an integer index is crazy. Rounding down or to the nearest integer is reasonable. Throwing an error might be best.
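If the round and trunc intrinsics were extended from CON sections to runtime code, the demotion could at least be made explicit (a sketch, not existing Spin):

  x := someArray[round(6.29)]    ' index 6, nearest integer
  x := someArray[trunc(6.29)]    ' index 6, toward zero
  x := someArray[6.29]           ' error (or warning) under the strict rule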
Allowing mixed operations/automatic type conversions would make my type-inference plans hopeless.
I don't think it makes it hopeless. If the compiler knows the types of x and y then it knows the type of x+y, regardless of whether x or y needs promotion. So I don't think it really changes the difficulty, as long as the promotion depends only on the types of the inputs.
Automatic type conversions would mean return type inference would still be easy, but inferring argument types would become hopeless.
No more hopeless than before. Consider
PUB select(x, y)
  if (global_flag)
    return x
  else
    return y
How can the compiler determine the return type of the function "select"? It's going to have to infer it from the actual usage of the "select" function.
I think there are 3 solutions:
(1) Disallow overloading and type promotion for parameters to functions. In this case the compiler can figure out the types of x and y based on what parameters are actually passed in to select. If the user passes different types for x and y, or different types at different call sites, throw an error.
(2) Require explicit type annotation on parameters if there is ambiguity.
(3) Allow overloading, and output multiple versions of the function depending on what parameters have been passed in (so there's a select.float, select.int, etc.). This will complicate type discovery even more, since now the type of the result is ambiguous, so it's probably not a good idea.
I think that types should be defined explicitly for floats and pointers. A long variable does not require an explicit type specifier. So we could define a method like this:
PUB float testsub(x, float y, float @z)
  return float(x) * y / float[z][3]
The method testsub returns a float value. The parameters x, y and z are long, float and float pointer respectively. It may be confusing to use the keyword float differently in different contexts, but I think it makes sense intuitively. I think it makes sense to use the "@" character to indicate the parameter is a pointer.
Including float and @ in the declaration will allow the compiler to figure out the proper way to interpret the arithmetic operators, and it also allows for error checking of calling parameters.
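A call site under this scheme might look like the following sketch (names hypothetical); the compiler can now verify each argument against the declaration:

  VAR
    float result
    float z[4]

  PUB demo(n)
    result := testsub(n, 2.5, @z)    ' checked: long, float, float pointer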
I don't like the idea of adding function overloading to Spin, such as the select.float and select.int examples. After all, we're talking about Spin and not C++. If someone wants to do function overloading they should program in C++. I think we should keep Spin as simple as possible, and only add enhancements that make programming easier.
Allowing mixed operations/automatic type conversions would make my type-inference plans hopeless.
I don't think it makes it hopeless. If the compiler knows the types of x and y then it knows the type of x+y, regardless of whether x or y needs promotion. So I don't think it really changes the difficulty, as long as the promotion depends only on the types of the inputs.
Automatic type conversions would mean return type inference would still be easy, but inferring argument types would become hopeless.
No more hopeless than before. Consider
PUB select(x, y)
  if (global_flag)
    return x
  else
    return y
How can the compiler determine the return type of the function "select"? It's going to have to infer it from the actual usage of the "select" function.
I think there are 3 solutions:
(1) Disallow overloading and type promotion for parameters to functions. In this case the compiler can figure out the types of x and y based on what parameters are actually passed in to select. If the user passes different types for x and y, or different types at different call sites, throw an error.
(2) Require explicit type annotation on parameters if there is ambiguity.
(3) Allow overloading, and output multiple versions of the function depending on what parameters have been passed in (so there's a select.float, select.int, etc.). This will complicate type discovery even more, since now the type of the result is ambiguous, so it's probably not a good idea.
Your problem is not a problem at all. Your #3 is closest to my (the OCaml) solution.
In your select function, x, y, and result all must have the same type, whatever type that may be. Since no type was specified, they can be anything, as long as they're all the same. For example, "float_var := select(1.0, 2.0)" and "int_var := select(3, 4)" are both perfectly valid, while "int_var := select(1.0, 2.0)" or "int_var := select(1.0, 5)" emit warnings since the types of x, y, and result aren't all the same. All four of those examples would call the same exact copy of the method, as if it were compiled with a Standard Spin compiler.
I didn't invent this type inference system. Whoever invented OCaml (or an earlier ML language) probably did. It works in OCaml. OCaml has no overloading anywhere, no explicit type annotations on parameters, and no ambiguity in type inference.
PUB select(x, y)
  if (global_flag)
    return x
  else
    return y
How can the compiler determine the return type of the function "select"? It's going to have to infer it from the actual usage of the "select" function.
I think there are 3 solutions:
(1) Disallow overloading and type promotion for parameters to functions. In this case the compiler can figure out the types of x and y based on what parameters are actually passed in to select. If the user passes different types for x and y, or different types at different call sites, throw an error.
(2) Require explicit type annotation on parameters if there is ambiguity.
(3) Allow overloading, and output multiple versions of the function depending on what parameters have been passed in (so there's a select.float, select.int, etc.). This will complicate type discovery even more, since now the type of the result is ambiguous, so it's probably not a good idea.
Your problem is not a problem at all. Your #3 is closest to my (the OCaml) solution.
Yes; I think we're violently in agreement in most respects. My point is just that operator overloading doesn't really change the landscape any. The two functions:
PUB select(x,y)
  if (global_flag)
    return x
  return y
and
PUB add(x,y)
  return x+y
have exactly the same type ambiguity when operator overloading is allowed, so any solution to the first one will also work for the second.
The only difference I can see is that *if* all types are storage compatible (have exactly the same size and can be stored in the same registers) then the select example can use the same code for all versions of the method. But in practice even Spin has different sized types, so if the select function does a pointer dereference (return @x and return @y) then different code will have to be emitted depending on whether byte, word, or long pointers are used. Another way to think of it is that Spin already has operator overloading; the meaning of "@" depends on what kind of pointer is being dereferenced.
PUB select(x, y)
  if (global_flag)
    return x
  else
    return y
How can the compiler determine the return type of the function "select"? It's going to have to infer it from the actual usage of the "select" function.
I think there are 3 solutions:
(1) Disallow overloading and type promotion for parameters to functions. In this case the compiler can figure out the types of x and y based on what parameters are actually passed in to select. If the user passes different types for x and y, or different types at different call sites, throw an error.
(2) Require explicit type annotation on parameters if there is ambiguity.
(3) Allow overloading, and output multiple versions of the function depending on what parameters have been passed in (so there's a select.float, select.int, etc.). This will complicate type discovery even more, since now the type of the result is ambiguous, so it's probably not a good idea.
Your problem is not a problem at all. Your #3 is closest to my (the OCaml) solution.
Yes; I think we're violently in agreement in most respects. My point is just that operator overloading doesn't really change the landscape any. The two functions:
PUB select(x,y)
  if (global_flag)
    return x
  return y
and
PUB add(x,y)
  return x+y
have exactly the same type ambiguity when operator overloading is allowed, so any solution to the first one will also work for the second.
No they don't. "select" passes x and y through, without caring what type they have. Whether you pass it ints or floats, the same exact method, in the same place in hubram, will get called. However, when operator overloading is allowed, the compiler must emit two separate versions of "add", one which uses integer addition and the other which uses float addition.
The only difference I can see is that *if* all types are storage compatible (have exactly the same size and can be stored in the same registers) then the select example can use the same code for all versions of the method. But in practice even Spin has different sized types, so if the select function does a pointer dereference (return @x and return @y) then different code will have to be emitted depending on whether byte, word, or long pointers are used. Another way to think of it is that Spin already has operator overloading; the meaning of "@" depends on what kind of pointer is being dereferenced.
That doesn't matter for this select function, since all parameters and locals are longs, and x and y in the select function are both parameters.
EDIT: I understand now. That's a good point. I'll have to think about that.
Spin already has an implicit type associated with constants defined in a CON section. The are either float or long. You cannot mix floats with longs in constant expressions. They are either float or long. So Spin is already overlaying the operators in CON sections.
Users may also want to harvest pieces of code, in some cases coming from other languages, and adding special operators certainly does not assist there.
1) Automatically promote operands in mixed-mode operations to the highest type between the two before the operation, optionally allowing float and int (à la MS Basic). (Spin already promotes bytes and words to longs pre-op.)
2) Same as #1, but issue warnings for mixed-mode operations.
3) Forbid mixed-mode operations, requiring the use of float and int where necessary (à la Fortran).
I'm just thinking about what would be the easiest to teach my students. Thus far, I've told them that Spin doesn't do floating point, just to avoid the FP libraries' function-call mania. For that, I'd lean towards #1, with pragmata available to revert to #2 or #3, similar to Perl's use strict.
-Phil
Allowing mixed operations/automatic type conversions would make my type-inference plans hopeless.
I'm still going to implement explicit operators ("*." and friends) in my compiler (as well as overloads), so that float operations can be done without needing to explicitly declare types.
I don't think it makes it hopeless. If the compiler knows the types of x and y then it knows the type of x+y, regardless of whether x or y needs promotion. So I don't think it really changes the difficulty, as long as the promotion depends only on the types of the inputs.
In a CON section you can do the following. However, you can't mix longs and floats, such as: Of course, implicit promotion isn't that hard to implement. Something like "2*3.4" would just be a float value of 6.8. I assume in general any mixed expression would just be promoted to float.
In the same vein, there should be pseudo-functions available to turn an integer into a float and vice versa without changing the bits, say forceint and forcefloat.
Or maybe bitwise ops should be illegal on float values, unless a forceint is applied first. One particularly onerous expression that should definitely trigger an error would be x << 3.5. Or do you simply demote the 3.5 to an integer 3 first?
Lastly, forcefloat would be absolutely necessary for any Spin methods that construct floating-point values the way the Spin-based floating-point library objects do now.
-Phil
Exactly. All type conversions are currently done explicitly, and they should remain that way.
I can't really argue against operator overloading if Standard Spin already supports it in constants.
Automatic type conversions would mean return type inference would still be easy, but inferring argument types would become hopeless.
Allowing any overloading at all makes causes problems for my type inference. Consider the following: With overloading, the compiler has no idea whether x, y, and result in my original multiply function are all ints or all floats.
With no overloading and without explicitly specifying any types, the compiler knows that x, y, and result must be integers. If you do "myfloatvar := multiply(1.0, 2.0)", the compiler would warn that you're passing floats to a function that expects ints and assigning an int to a float. If you want a float version, you can write: And the compiler will know that x, y, and result are all floats.
The most sensible solution I can think of to this is, of all possible overloads for each operator, mark one (the standard integer one) as the default to use if no types are specified.
By the way, my compiler should be able to deal with any variations on type inference, automatic promotion, overloading, etc. that appear in any other Extended Spins. I'll define "*." and overloaded "*" operators in interface files, and supplementary interface files can be written for each floating point implementation that link the (unmodified Standard Spin) implementation to the interface. #ifdefs in these interface files can be used to enable or disable overloading and automatic type conversion as desired by the programmer. The only reason I'm saying anything here about syntax and semantics for this is because it would be nice if there was compatibility to at least some extent between the various Extended Spins, and because having automatic type conversions and such would ruin many of my plans, so I'll have to make them optional (and default to being disabled).
I'd like my compiler to eventually be able to emit Standard Spin, after evaluating all macros, replacing new operators with method calls, etc. Once the framework for that is in place (I wouldn't expect it for a long, long time, if ever), it would probably be pretty easy to also make it able to output C++.
x := someArray[6.29]
Could you elaborate? What about it? It's probably an error but it should still compile to "x := someArray[$40c947ae]" (possibly with a warning), since that's what Standard Spin would do.
-Phil
You'll be able to do that with my compiler by overloading [] for floats.
Indexing such an array with a float makes no sense. Using the bits of the float as an integer index is crazy. Rounding down or to the nearest integer is reasonable. Throwing an error might be best.
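For what it's worth, Standard Spin can already express both rounding choices at compile time for constant expressions; trunc and round are real Spin compile-time float operators:

```spin
x := someArray[trunc(6.29)]   ' truncate toward zero: someArray[6]
x := someArray[round(6.29)]   ' round to nearest:     someArray[6]
```

Only the float-bits-as-index behavior needs a runtime float library to reproduce with variables.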
I like the linear interpolation idea !
and here was me thinking that was a Boolean array, and it fetched Bit 29 from Long offset 6
No more hopeless than before. Consider: how can the compiler determine the return type of the function "select"? It's going to have to infer it from the actual usage of the "select" function.
I think there are 3 solutions:
(1) Disallow overloading and type promotion for parameters to functions. In this case the compiler can figure out the types of x and y based on what parameters are actually passed in to select. If the user passes different types for x and y, or different types at different call sites, throw an error.
(2) Require explicit type annotation on parameters if there is ambiguity.
(3) Allow overloading, and output multiple versions of the function depending on what parameters have been passed in (so there's a select.float, select.int, etc.). This will complicate type discovery even more, since now the type of the result is ambiguous, so it's probably not a good idea.
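The select method wasn't quoted in this part of the thread; a hypothetical reconstruction matching the description (x and y pass straight through unchanged) might be:

```spin
VAR
  long which                  ' hypothetical: some runtime condition

PUB select(x, y) : result
  if which
    result := x
  else
    result := y
```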
Including float and @ in the declaration will allow the compiler to figure out the proper way to interpret the arithmetic operators, and it also allows for error checking of calling parameters.
I don't like the idea of adding function overloading to Spin, such as the select.float and select.int examples. After all, we're talking about Spin and not C++. If someone wants to do function overloading they should program in C++. I think we should keep Spin as simple as possible, and only add enhancements that make programming easier.
Your problem is not a problem at all. Your #3 is closest to my (the OCaml) solution.
In your select function, x, y, and result all must have the same type, whatever type that may be. Since no type was specified, they can be anything, as long as they're all the same. For example, "float_var := select(1.0, 2.0)" and "int_var := select(3, 4)" are both perfectly valid, while "int_var := select(1.0, 2.0)" or "int_var := select(1.0, 5)" emit warnings since the types of x, y, and result aren't all the same. All four of those examples would call the same exact copy of the method, as if it were compiled with a Standard Spin compiler.
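In code, those four call sites would look like this (variable names are hypothetical):

```spin
float_var := select(1.0, 2.0)   ' OK: x, y, and result all inferred float
int_var   := select(3, 4)       ' OK: all inferred int
int_var   := select(1.0, 2.0)   ' warning: float arguments, int result
int_var   := select(1.0, 5)     ' warning: x and y have different types
```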
I didn't invent this type inference system. Whoever invented OCaml (or an earlier ML language) probably did. It works in OCaml. OCaml has no overloading anywhere, no explicit type annotations on parameters, and no ambiguity in type inference.
Yes; I think we're violently in agreement in most respects. My point is just that operator overloading doesn't really change the landscape any. The two functions (the pass-through select and an add that uses an arithmetic operator) have exactly the same type ambiguity when operator overloading is allowed, so any solution to the first one will also work for the second.
The only difference I can see is that *if* all types are storage compatible (have exactly the same size and can be stored in the same registers) then the select example can use the same code for all versions of the method. But in practice even Spin has different sized types, so if the select function does a pointer dereference (return @x and return @y) then different code will have to be emitted depending on whether byte, word, or long pointers are used. Another way to think of it is that Spin already has operator overloading; the meaning of "@" depends on what kind of pointer is being dereferenced.
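Concretely, Standard Spin already emits different bytecode depending on access size, even though the source looks uniform; this is real Standard Spin syntax:

```spin
PUB demo(p) | b, w, l
  b := byte[p]                  ' byte-sized read from hub address p
  w := word[p]                  ' word-sized read
  l := long[p]                  ' long-sized read
```

The byte/word/long access operators are the size-dependent "overloading" Spin already has.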
No they don't. "select" passes x and y through, without caring what type they have. Whether you pass it ints or floats, the same exact method, in the same place in hubram, will get called. However, when operator overloading is allowed, the compiler must emit two separate versions of "add", one which uses integer addition and the other which uses float addition.
That doesn't matter for this select function, since all parameters and locals are longs, and x and y in the select function are both parameters.
EDIT: I understand now. That's a good point. I'll have to think about that.