This time we have 16 cogs and really high speed, so a math library wouldn't be as costly as it was on Propeller 1. To me, that's a much easier sell than it was on Propeller 1, where it took at least a few cogs if I recall correctly.
When features are this easy to add, Parallax could leverage them for the next revision. Future improvements, no matter how fast they are completed, show that we're in this business for Parallax and our customers. Of course, I don't know anything about the design time required to add floating point, so it's very difficult to say if it's worth the effort at this stage. However, please feel free to disagree with me.
I"d suggest talk to any customers, and check what SPEED they need on floating point.
It can be done with the design as it is now, but it may be there are helper operations that can be placed in the single MathBlock, that can speed Float, for not too much silicon impact.
To find those, will need to port and test some existing Float libraries, in the FPGA release.
As Chip has mentioned, it is easier to modify the MathBlock, than to re-spin the COGs
So why make it more C'ish by adding complexity that will upset people? If you want C'ish, use C, right?
Why make it more C'ish? So I don't have to write F.DIV(F.MULT(x,y),z). So the next time I forget the "@" in ser.str(bufstr) the compiler will tell me I need to pass a pointer to ser.str, and not a scalar value. So that the next time a newbie does ser.dec(x), where x is a floating point number, the compiler will tell him there's a problem.
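For example (the F object and its MULT/ADD/DIV names just follow the style used in this thread, and the typed-parameter version is the hypothetical Spin2 syntax being discussed, not anything that exists today):
' today: every float operation is an explicit call on a float object
PUB ScaleNow(x, y, z)
  return F.DIV(F.MULT(x, y), z)
' hypothetical typed version: the compiler picks the float operations itself
PUB ScaleProposed(float x, float y, float z)
  return x * y / z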
All this talk of cordic and floating point and typing and...and..etc. I am surprised no one has mentioned adding a MAC instruction and the hardware for it. Or did I miss that post?
Please no. I'm not interested in floating point feature creep. There are lots of 'cool' things parallax can do - it should do the ones that are needed to sell some chips soon, then there's money to make the fantasy chip.
We already have types for global VARs, and the type for params and locals is currently forced to be long. It would be fairly trivial to have a new float type, and allow params and locals to be typed.
I feel like having types on params is useful, particularly for type checking (which currently doesn't exist and allows for confusing and fail-prone code).
I dislike and will strongly fight against any kind of silly decorations/wrappers to indicate types for vars/params/locals. The tick mark is horrible. Wrapping with float(...) is just wrong; it already means convert to float. The variable types should be used to determine the operations to do.
Also, it doesn't matter if Chip puts float math into the hardware, Spin2 should have float math built in. It's practically broken that Spin doesn't already have float math built in.
Finally, having float math doesn't make Spin/Spin2 harder to understand or use. If anything it makes it easier, since most people are familiar with and expect to use float math, and you have to do special stuff to deal with the fact that you don't have float math built in.
VAR
  long x
  long z
DAT
y             long    22
PUB start
  x := 4
  z := DoIt(42, 666)
PUB DoIt(float l, float r)
  return l * x + y / r
Don't we end up needing to put types on all declarations in VAR and DAT and the local variables and the function returns to make this work? Before you know it you have C or perhaps Pascal.
All of this language debate is somewhat independent of whether we have hardware float support or not.
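For what it's worth, here is a sketch of where that ends up, with hypothetical type keywords on everything (none of this is current syntax):
VAR
  float scale                  ' hypothetical typed VAR
  long  count
DAT
offset        float   22.0     ' hypothetical typed DAT entry
PUB DoIt(float l, float r) : float | float tmp
  tmp := l * scale
  return tmp + offset / r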
Personally I don't see what all the fuss is about. If we get it, it's a bonus, no matter how easily or strangely it's integrated into PASM and SPIN.
It's not a necessary bonus, as F32 was fine, and it'll be faster now with the Math-hub and a controller that's 5+ times faster.
You could even ( this is extreme, I know ) have
PUB DoIt( float l, float r )
return (FLOATMATH) l * x + y / r
or
return (INTMATH) l * x + y / r
You could even have it integrate F32 directly into the compiler. I don't mind either way; a lot of it I might not use, as 32-bit ints and fixed-point maths are more than adequate for me, but that's just the oldskool in me talking. If it's there and I need to use it, I'll use it by whatever means it needs.
To me the P2, in whatever form it'll be in, will be the most awesome fun ever! I can't wait to play with it. If it has F32... great; if it doesn't... great. I'm just gonna be grateful for what we get.
Admittedly it'd be better to have it nice and simple like P1 was, but in the P1 cog vars were always 32-bit ints anyway. Maybe we keep everything as a 32-bit int, but when you want that return line to be done correctly, have
return l `* `x + `y `/ r
or instead of ` we could use £, as I don't think we've used the £ symbol yet.
the ` would turn the next var or operator into a float version of it.
or if you wanted it int related
return ±l * x + y / ±r
using ± as a float to int operator.
like I said, I honestly don't care what operator gets used, I'll just be happy using what we've got in whatever way we've got it.
We already have types for global VARs, and the type for params and locals is currently forced to be long. It would be fairly trivial to have a new float type, and allow params and locals to be typed.
I feel like having types on params is useful, particularly for type checking (which currently doesn't exist and allows for confusing and fail-prone code).
I dislike and will strongly fight against any kind of silly decorations/wrappers to indicate types for vars/params/locals. The tick mark is horrible. Wrapping with float(...) is just wrong; it already means convert to float. The variable types should be used to determine the operations to do.
Also, it doesn't matter if Chip puts float math into the hardware, Spin2 should have float math built in. It's practically broken that Spin doesn't already have float math built in.
Finally, having float math doesn't make Spin/Spin2 harder to understand or use. If anything it makes it easier, since most people are familiar with and expect to use float math, and you have to do special stuff to deal with the fact that you don't have float math built in.
It's seldom I quote an entire post, but I agree with everything you said!
As to hardware floating point: I can't imagine it being more important than something else that would have to be left out to include it.
And MAC: it was in P2. Did it go away? If so, are we leaning away from signal-processing apps?
Doing it a "clean" way is never going to happen with floats and ints
if you want it as transparent as possible, we either lose floats, or we lose ints.
especially when you want to do
return (float) l * (int) x + (int) y / (float) r
Without telling the compiler how you want it to handle the crossover, it can't know what to do; that's why in C etc. you have to cast it.
You have more experience with how compilers work than me, so if you can come up with a great, clean way to keep SPIN as simple as possible while still handling that line correctly when it's typed as return l * x + y / r, then please let me know. I for one haven't the foggiest idea how to do it without some form of "casting" (and I use casting in the loosest of terms), as I too would rather have clean Spin than messy C.
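Just to show what I mean, here's roughly how that mixed line has to be spelled out today, doing every int-to-float conversion by hand (the F object and MULT/ADD/DIV names follow the style used in this thread; FFLOAT as the integer-to-float routine is an assumption):
' x and y are integer globals from the VAR block; l and r arrive holding floats
PUB Mixed(l, r) | fx, fy
  fx := F.FFLOAT(x)                             ' convert the ints to floats by hand
  fy := F.FFLOAT(y)
  return F.ADD(F.MULT(l, fx), F.DIV(fy, r))     ' (float) l * (int) x + (int) y / (float) r, done by hand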
Please no. I'm not interested in floating point feature creep. There are lots of 'cool' things parallax can do - it should do the ones that are needed to sell some chips soon, then there's money to make the fantasy chip.
Never mind feature creep. This change to SPIN doesn't get us anything that can't already be done with F32. I agree that there's a usability value, but the tradeoff is SPIN now has a software library embedded in it, whether you need floating-point support or not.
Baggers,
There doesn't need to be any visible casting. It can all be handled based on the expression and the assignment.
Seairth,
The plan is that Spin2 will include modules/snippets of code as needed. Remember, the Spin2 runtime is part of the compile and download into the chip, it's not in ROM. We plan to have things like "built in" I2C support via just including some premade code assigned to keywords (snippets) whenever the code uses those keywords. So your simplest code will include only the bare minimum Spin2 runtime, while a complex program would include much more.
We have even discussed making it so that what is "in cog" for the Spin2 runtime can be determined at compile time. Spin2 is going to be very nice if everything Chip, Jeff, Myself, and others have discussed/planned gets done.
What to do with a * b? Is that an integer mul or a float mul?
One way to tell is sticking types on everything.
But what if one is an int and the other is a float?
You can convert them both to float and get the wrong answer most of the time.
Or you can convert them to int and get the wrong answer most of the time.
Or generate an error as Ada would.
Or you can let the programmer decide by using a function call like F32 or a weird operator. Or perhaps allowing the programmer to specify the type at the point of operation: float(a) * int(b). All of which are messy.
People have been fighting with these problems for decades. None of the solutions is acceptable to everyone, whether it's being strict and throwing an error like Ada or doing weird automatic type conversions like C.
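Concretely, with Spin as it stands you have to make that choice yourself at every operation. Assuming a float object F in the style used earlier in the thread, with FFLOAT and TRUNC as its (illustrative) conversion routines:
PUB Demo(a, b) : r
  ' a holds an integer, b holds IEEE-754 float bits: the programmer has to choose
  r := a * F.TRUNC(b)                   ' integer multiply, b truncated to an int first
  r := F.MULT(F.FFLOAT(a), b)           ' or: float multiply, a converted to a float first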
There doesn't need to be any visible casting. It can all be handled based on the expression and the assignment.
I certainly hope it isn't too automatic! Writing bit 6 of a float to a pin is definitely going to be the wrong thing. Better to develop in Pascal than to have that happen.
Most of this discussion about adding types is pointless since Parallax would never allow such a thing. So I think Spin programmers are forever condemned to writing code like F.MULT(x,F.ADD(y,z)). There will always be newbies that will fall into the various Spin traps, and not get any warnings from the compiler. If there was ever any interest in improving the Spin language it would have been done over the past several years. Thank goodness that Parallax has listened to the education market and realized that there was sufficient reason to support C on the Prop.
So my suggestion is to keep Spin the way it is, and quit wasting time trying to convince Parallax that it needs to be improved.
Couldn't agree more; I'm quite happy to stick to F.MULT(x,F.ADD(y,z)). It keeps SPIN nice and clean too!
If a float is involved in the expression then upcasting and doing float math will give the correct answer most of the time. It'll only be wrong in weird obscure cases that mostly don't matter. I think it's silly to say it would be wrong most of the time.
We can make it work without decorations, and perhaps have some casting operators as optional things to force things the way you want it. However, most users will not need it.
I think it's more of a mistake to not have floating point math in the language than any of this other debate over how to do it.
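Something like this is what I have in mind (the float declaration and the trunc() cast are hypothetical Spin2 syntax, shown only to make the idea concrete):
VAR
  float volts                           ' hypothetical typed VAR
  long  counts
PUB Sample
  volts  := counts * 0.00322            ' a float is involved, so float math is generated
  counts := trunc(volts / 0.00322)      ' optional cast operator forces the conversion explicitly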
What? You are going to have functionality like I2C turn up as keywords in the language rather than being objects and functions.
The actual syntax is still undecided, and may in fact end up being some kind of "built in" object or something else. The idea is that we want to make it easy for anyone to use common external devices via I2C, SPI, etc. without needing to go find and know to include some external object/library/whatever. We will try to be consistent and simple, and likely extend OpenSpin to do the same thing.
Most of this discussion about adding types is pointless since Parallax would never allow such a thing. So I think Spin programmers are forever condemned to writing code like F.MULT(x,F.ADD(y,z)). There will always be newbies that will fall into the various Spin traps, and not get any warnings from the compiler. If there was ever any interest in improving the Spin language it would have been done over the past several years. Thank goodness that Parallax has listened to the education market and realized that there was sufficient reason to support C on the Prop.
So my suggestion is to keep Spin the way it is, and quit wasting time trying to convince Parallax that it needs to be improved.
What makes you think Parallax won't allow it? Quite the contrary; my discussions with them have shown that they want float math support to be built in/easier to use. It's my understanding that Spin on P1 was supposed to have more built-in float support than it did.
My opinion is based on the hundreds of posts that have been made over the past 5 years requesting improvements to Spin, and the inaction that resulted from those posts.
If a float is involved in the expression then upcasting and doing float math will give the correct answer most of the time. It'll only be wrong in weird obscure cases that mostly don't matter. I think it's silly to say it would be wrong most of the time.
This may seem like a quibble but I respectfully disagree. Because:
With a 32 bit signed twos complement representation I can represent positive integers up to 2 to the power 31, minus 1, which is 2147483647.
If I "up cast" that to a 32 bit float I only get 24 bits of precision so the biggest positive integer I can represent accurately is 2 to the power 24 - 1 which is 16777215.
That means that when up casting 2147483647 - 16777215 or 2130706432 of my possible values, about 99.2%, are not quite right. That is to say they are wrong.
If I "down cast" we have a similar problem. I'll leave it to the reader to calculate the percentage of wrong results.
So. Mostly wrong. This is why languages like Ada don't automatically do this kind of thing. They want you to know that you have a problem on your hands.
The C approach is very lax and takes the view that, well OK we are losing accuracy here but, hey, the programmer knows what he is doing and he is working with floats so a little slop is OK.
JavaScript pushes this even further by making all numbers 64 bit floats. Then we can handle 53 bit integers accurately as well. Big enough for anyone, right? But still people complain because "0.1 + 0.2 == 0.3" is false. You just can't win.
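To make that concrete (the F object with FFLOAT/TRUNC conversion routines follows the naming style used earlier in the thread, so treat the exact names as illustrative; the boundary itself is 2 to the power 24 = 16_777_216):
PUB PrecisionDemo | n, f, back
  n    := 16_777_217                    ' 2^24 + 1, no problem as a 32-bit integer
  f    := F.FFLOAT(n)                   ' nearest 32-bit float is 16_777_216
  back := F.TRUNC(f)                    ' back <> n: one count lost in the round trip
  ' above 2^24 only every 2nd integer is representable, above 2^25 every 4th, and so on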
Re: The language extensions. I'd be much happier with the built in objects. Sounds like a great idea.
Heater,
You forgot (or just excluded for simplicity) that that ~16million is repeated many times across the 32bit range (128 times), and that is effectively doubled because of the sign (for signed int/longs anyway). A 32bit float can represent ~469.76 million ints accurately, and twice that many signed ints. Anyway, it's not 99.2% wrong, although by your definition it is a large percentage wrong.
In most practical use cases, it will be just fine.
Also, 0.1 + 0.2 == 0.3 being false is a matter of using the wrong operator; you shouldn't compare floats with an exact ==.
Anyway, I feel like you want to throw out the many, many practical use cases where it would be easy and correct, just because there are cases when it's not (and when it's not, most of the time it's a matter of precision, not being completely wrong). We can easily warn about precision loss on autocasting if that makes you feel better.
In any case, I think it's a mistake to not include floating point support as part of the language properly. We can quibble about the syntax, but I think the simple syntax is best, with optional extras for advanced use cases.
As for language extensions, I think built in objects is the simplest and most consistent way to do it. Perhaps a reserved object name like "io" or "util" or something suitably generic.
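Purely as a shape for discussion (nothing here is decided; the io object, its i2c method names, and the pin constants below are all made up for the sketch), it might read something like:
CON
  SCL = 28, SDA = 29, ADDR = $48
PUB ReadSensor : value
  io.i2c_start(SCL, SDA)                ' "io" would be a reserved built-in object, no OBJ needed
  value := io.i2c_read(ADDR, 2)         ' method names are placeholders, not a proposed API
  io.i2c_stop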
My opinion is based on the hundreds of posts that have been made over the past 5 years requesting improvements to Spin, and the inaction that resulted from those posts.
Inaction doesn't mean "will not allow". In this case, it was mostly a case of not having the ability to make changes to Spin without involving Chip, who has been busy with P2 stuff the entire time. Once OpenSpin got to a reasonable state, it became possible, and I have added a couple features to Spin (preprocessor, symbols can be 254 chars instead of 30, and some other things), and plan to add more. Some of the things I'd like to add are best done after we know what we are doing for Spin2.
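For anyone who hasn't tried the preprocessor, it works along these lines (directive spelling from memory, so double check against the OpenSpin docs; F.MULT follows the float-object style used in this thread):
#define USE_FLOAT
PUB Calc(a, b)
#ifdef USE_FLOAT
  return F.MULT(a, b)
#else
  return a * b
#endif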
Agreed, there is actually more than one Spin, and the OpenSpin work looks like a very solid base.
PUB DoIt( l , r )
for ints, and
PUB DoIt( float l, float r )
when you want to indicate to the spin compiler that it needs to use float routines.
or float * for pointers. Yes, this looks a little like that yucky C thing, but it could also help with mainstreaming?
Either way, we need to check with Chip if it's a quick and easy enough addition to put floats in the Math-hub.
Before going into a long major discussion on which is the best way to put it into SPIN2, I'd suggest talking to any customers and checking what SPEED they need on floating point.
It can be done with the design as it is now, but there may be helper operations that could be placed in the single MathBlock to speed up float, for not too much silicon impact.
To find those, we'll need to port and test some existing float libraries on the FPGA release.
As Chip has mentioned, it is easier to modify the MathBlock than to re-spin the COGs.
That was not my point. My point was that if you are going to add complexity and dirty up the language to get it to do what C does why not just use C?
Like that guy putting the big wheels on his Ferrari so he can use it as a tractor, just use a tractor instead.
Of course there may be a super clean way to do all this, hence the discussion.
Never mind feature creep. This change to SPIN doesn't get us anything that can't already be done with F32. I agree that there's a usability value, but the tradeoff is SPIN now has a software library embedded in it, whether you need floating-point support or not.
I think SPIN should be kept as lean as possible.
That would then work, as the compiler would know that the return value is going to be a float, so it can work it accordingly.
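For example (the ": float" return declaration is hypothetical syntax, just to show the idea):
PUB DoIt(l, r) : float
  return l * x + y / r                  ' the declared float result tells the compiler to use float ops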
There doesn't need to be any visible casting. It can all be handled based on the expression and the assignment.
Seairth,
The plan is that Spin2 will include modules/snippets of code as needed. Remember, the Spin2 runtime is part of the compile and download into the chip, it's not in ROM. We plan to have things like "built in" I2C support via just including some premade code assigned to keywords (snippets) whenever the code uses those keywords. So your simplest code will include only the bare minimum Spin2 runtime, while a complex program would include much more.
We have even discussed making it so that what is "in cog" for the Spin2 runtime can be determined at compile time. Spin2 is going to be very nice if everything Chip, Jeff, Myself, and others have discussed/planned gets done.
So true
varname! for byte
varname% for signed long
varname& for unsigned long
varname# for float
varname$ for string
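In use it would look something like this (hypothetical, just to show the flavour):
PUB Area(width#, height#) : area#
  area# := width# * height#             ' # would mark every float, BASIC style
PUB Banner(msg$) | len%
  len% := strsize(msg$)                 ' $ for strings, % for signed longs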
Because decorating variable names like that is horribly ugly.