PUB usecallback(funcptr)
x := METHOD(2:1)[funcptr](a, b) ' two arguments, 1 return value
or
PUB usecallback(funcptr)
x := METHOD(x,y):z[funcptr](a, b)
Although, to be honest, I like @"Dave Hein"'s suggestion that the compiler should just examine the context and deduce the number of parameters and return values from that.
It is true that none of these methods prevent the programmer from shooting themselves in the foot. For that we'd need to add actual types to Spin, which would be a big change to the language but would bring other benefits (and costs, too!)
We don't have it, yet.
I'm not sure if the compiler could always figure out use cases. What about nested pointer calls?
Just for reference, fastspin does allow type declarations in Spin programs, but the types are not checked except when the Spin is called from C or BASIC. So you can write something like:
PUB fadd(a = float, b = float) : c = float
to indicate a function that adds two floating point numbers. When used in Spin there's no type checking on it, but if you use it in BASIC or C and try to pass a string or other inappropriate value it will complain.
This syntax could certainly be extended to something like:
PUB usecallback(funcptr = METHOD(2:1))
x := funcptr(a, b)
...
usecallback(@mymethod)
if we wanted to get into type checking on parameters. That is quite a can of worms, but on the other hand does add a lot to the safety of the language.
> We don't have it, yet.
> I'm not sure if the compiler could always figure out use cases. What about nested pointer calls?
Ah, good point: in "fooptr(barptr())" we don't know how many parameters "fooptr" expects because we don't know how many "barptr" returns.
One brute force solution would be to disallow that (so indirect calls through method pointers cannot be used as parameters to method pointers). That's probably not too awful, it's not a situation that arises very often and it's always possible to work around it with some temporary variables.
A more complicated variation would be to allow it, but only if we can figure out the type in some other way, so:
x, y := barptr()
fooptr(barptr())
would be OK because the compiler can figure out that barptr returns two values before it needs to use it in the call to fooptr().
I guess theoretically the compiler could also try to trace back and figure out the signature of any methods that were assigned to the original pointers.
Some suggested syntaxes so far for calling a method via a pointer "fooptr":
x := ~~fooptr:1(a, b) ' or some other sequence like ^^ or @ @
x := MCALL(1, fooptr, a, b)
x := MCALL(1, fooptr(a, b)) ' maybe the "1," could be optional here?
x := METHOD[fooptr:1](a, b)
x := fooptr(a, b) ' with restrictions on what a and b can be
x := fooptr(a, b) ' with some hint elsewhere to the compiler about the signature of fooptr
Question: can we write just "~~fooptr" instead of "~~fooptr:1(a,b)", since I think :0 and :1 are treated the same? If so, this would be a nice default.
I think I like the last two best. Any other thoughts?
Maybe method pointer calls can simply be a variable followed by '('.
METHOD POINTER ASSIGNMENT
variable := @object.method
variable := @object.method[index]
variable := @object[index].method
variable := @object[index].method[index]
variable := @method
variable := @method[index]
METHOD POINTER USAGE
                              params   results
variable()                       0        0
variable():1                     0        1
variable(par1)                   1+       0
variable(par1):1                 1+       1
variable(par1,par2)              2+       0
variable(par1,par2):2            2+       2
variable(par1,par2,par3):4       3+       4

Note: 1+/2+/3+ is due to the possibility of parameters returning more than one result.
That would be really simple and not impose any restrictions on method pointer usage. The programmer had better get his parameters and results straight, though.
That sounds fine. I'm concerned about the indexed variants though:
variable := @object.method[index]
variable := @method[index]
etc.
Do those assign a pointer to the indexed method after the given one? That seems like it has a lot of potential to go wrong. It also makes removing unused functions impossible, since we cannot know whether there might be an indirect reference to a method via an index.
If we leave that feature out it'll still be possible to create tables explicitly and call indirectly through the table:
functab[0] := @f0
functab[1] := @f1
...
' now use the table
functab[i](x)
The advantage is that every method that's used must be explicitly mentioned in the code, either in a direct call or with an @ in front of it, so the unused ones can be removed.
Another question: in Spin1 there really isn't a "no return value" case; all methods return a value. Is that still the case in Spin2? For example, the following four methods are all legal in Spin1, and the first three return 1 (while the last returns 0):
PUB f0 : r
  r := 1

PUB f1
  return 1

PUB f2
  result := 1

PUB f3
  ' do nothing
Are f1 and f2 still legal in Spin2? If so that would be very nice, and also simplifies the method pointer syntax quite a bit, because there really is no 0 return case (every function returns a value, and the ones with no explicit result variable have a single implicit result named "RESULT"). So there's no need to write:
x := funcptr(a,b):1
we can default to a single return value and just write:
x := funcptr(a,b)
That'll cover 99% of the use cases, and when the method pointer does point to a function with multiple return values we can still do:
Eric, yes, I allowed indexes for both objects and methods, for calls as well as pointer assignments.
I implemented the method-pointer syntax I proposed earlier:
PUB go | w,x,y,z
  w := @m0      'assign method pointers
  x := @m1
  y := @m2
  z := @m3
  repeat        'call methods via pointers
    w()
    x()
    y()
    z()

PRI m0
  pinnot(16)

PRI m1
  pinnot(17)

PRI m2
  pinnot(18)

PRI m3
  pinnot(19)
I see how method indexes will cause code removal to be impossible. I just figured it would be handy to be able to have a range of methods which take and return the same number of parameters and results, but all do different operations. I would hate to force people to build method pointer tables to achieve the same thing when there are lots of methods involved. Maybe in your compiler, you could give a warning that a method index is inhibiting code-removal optimization.
In Spin2, methods may return NO results, at all, because results must be discretely named. There is no automatic RESULT return value. This happened when I added multiple return values, which now range from 0 to 15. So, your f1 and f2 examples are no longer legal in my Spin2.
I hate to give you any "bad" news, but there are only going to be a few things to adapt to, and they shouldn't be a big deal. I want to write a language description very soon. The only thing I have left to do in the compiler now is add the COGSPIN command which starts Spin2 programs. After that, I'll integrate the interpreter code into PNut.exe and we can start using it. Once it's running, there will probably be a few tweaks, but I think it ought to stabilize quickly, as it's pretty simple.
> I see how method indexes will cause code removal to be impossible. I just figured it would be handy to be able to have a range of methods which take and return the same number of parameters and results, but all do different operations. I would hate to force people to build method pointer tables to achieve the same thing when there are lots of methods involved. Maybe in your compiler, you could give a warning that a method index is inhibiting code-removal optimization.
It's not just my compiler that will want to do unused method removal, it's something you may want to add to your compiler eventually. Openspin, bstc, and I think homespun all implemented that, it's a very useful (and often requested) feature.
On the flip side, how often are indexed method pointer computations going to be used in practice? And what are the potential ways the user can get them wrong, for example if they rearrange any code, or get an index out of range? At least with an explicit method pointer table, if they add a new method or rearrange methods, everything will still work. As a minor benefit, removing a feature makes documentation and testing simpler.
> In Spin2, methods may return NO results, at all, because results must be discretely named. There is no automatic RESULT return value. This happened when I added multiple return values, which now range from 0 to 15. So, your f1 and f2 examples are no longer legal in my Spin2.
I'm just suggesting that if you do provide a default RESULT return value then "funcptr():0" and "funcptr():1" become the same thing, which really simplifies the syntax for indirect calls (they can both be just "funcptr()"). And it has the additional benefit of preserving Spin1 compatibility.
This actually could be a big deal, because it's legal to ignore return values, so it's difficult to know whether a serial tx method that's normally called by
ser.tx(c)
returns a value or not. If you do something like:
fptr := @ser.tx
how does fptr get called? Is it "fptr(c)" or "fptr(c):1"? What if some serial objects choose to return a value for tx (e.g. to indicate an error) and others don't? Then we can't mix pointers to their "tx" objects. I've actually run into a very similar problem when implementing the C library, because some C functions are "void" (return no values) and others are "int" (return an integer). It wasn't an issue for Spin1, and it'd be nice for Spin2 to avoid this problem.
When using method pointers, the ':results' suffix is only used by the compiler to settle accounts when results are going to be returned and they must be counted for matching purposes. If the method is being used as an instruction, where no results are used, there's no need for the ':results' suffix.
The way stack variables are laid out in memory is like this:
PUB ThisMethod(a,b,c,d) : e,f | g,h,i
'a' is at 0 and is a parameter variable (supplied by caller)
'b' is at 1 and is a parameter variable (supplied by caller)
'c' is at 2 and is a parameter variable (supplied by caller)
'd' is at 3 and is a parameter variable (supplied by caller)
'e' is at 4 and is a result variable (cleared on entry, returned to caller)
'f' is at 5 and is a result variable (cleared on entry, returned to caller)
'g' is at 6 and is a local variable (uninitialized, may be long/word/byte+array)
'h' is at 7 and is a local variable (uninitialized, may be long/word/byte+array)
'i' is at 8 and is a local variable (uninitialized, may be long/word/byte+array)
This works out kind of nicely for in-line PASM, as the first 16 stack variables are copied to $1E0..$1EF and given the same temporary names for use by the PASM code. They are restored to the stack when the PASM code returns.
> When using method pointers, the ':results' suffix is only used by the compiler to settle accounts when results are going to be returned and they must be counted for matching purposes. If the method is being used as an instruction, where no results are used, there's no need for the ':results' suffix.
If the ':results' suffix is ignored for instructions, then there's never going to be a need for ':0'. So perhaps the default should be ':1'? Then we could write:
VAR
  long ptr1, ptr2
  long x, y, global

PUB go
  ptr1 := @noresults
  ptr2 := @oneresult
  ptr1(x)
  x := ptr2(y)

PUB noresults(x)
  global := x

PUB oneresult(x) : r
  r := x
But if, in fact, the method pointed to by ptr2 doesn't have a return value, it won't return anything, regardless of the ':results' suffix. So, any time return value(s) are expected, it's necessary to use the ':results' suffix to accurately establish your intent.
> But if, in fact, the method pointed to by ptr2 doesn't have a return value, it won't return anything, regardless of the ':results' suffix. So, any time return value(s) are expected, it's necessary to use the ':results' suffix to accurately establish your intent.
OK, I think we're coming at this from different angles. To explain where I'm coming from, suppose the compiler required every indirect call to have a ':results' suffix, even the ones with no results. (I realize that's not the way your compiler works now, but bear with me.) So the user would write:
PUB go | f, g
  f := @m0
  g := @m1
  f():0       ' invoke f, it returns nothing
  g():1       ' invoke g, ignore return value
  x := g():1  ' invoke g, save its return value

PRI m0
  pinnot(56)

PRI m1 : r
  pinnot(57)
  r := 0
Now imagine a version of the compiler where if there's a call through a method pointer with no explicit ':results' then ':1' is assumed (again, I realize this is counter-factual, but I think it *could* be done). Then we would write:
f():0 ' invoke f, it returns nothing
g() ' invoke g, ignore return value
x := g() ' invoke g, save its single return value (:1 assumed)
Now my final point: if the ':results' modifier is ignored for instructions, then leaving off the ':0' on the first call shouldn't matter, so we could write:
f() 'invoke f
g() ' invoke g
x := g()
since 'f()' is used as an instruction, it doesn't matter whether we write 'f():0' or 'f():1', right?
Otherwise, it seems to me that for the case of using 'g' as an instruction (it points to a function returning one value) then we'd always have to write "g():1".
Further to the above, for functions with multiple return values we'd always have to add ':2', ':3', etc. That's OK, those functions are fairly rare. The most common cases by far will be ':0' and ':1'.
I've implemented method pointers in fastspin's Spin dialect (they were already present in BASIC and C) using the syntax Chip specified above. There are two differences: (1) indexed method pointer addresses are not implemented, and (2) due to Spin1 compatibility the ':0' and ':1' result specifiers do the same thing and are the default. For multiple return values you still have to add explicit ':2', ':3', etc.
When Spin2 is finished I can add a compatibility switch which will at least warn about the ':results' difference. The indexed method pointers I probably won't add -- they're a lot of work(*) and I think they're actually a negative gain (**).
(*) In fastspin a direct method call like "ser.tx(c)" gets assembled directly to "calla #__serialobj_tx" rather than going through a jump table. I could create a jump table internally for use by indexed method pointer lookup, and I will if it turns out to be a very popular feature, but until then there are other things that are a bigger bang for the buck for my time.
(**) since they interfere with unused object removal
I 100% agree that indexed method pointers are a negative. The cost of allowing really bad bugs and losing unused method removal are too great.
I've been coding for nearly 40 years, and I would NEVER use them. I would build a pointer table instead (where the index can be bounds checked, and rearranging functions in their source location doesn't break everything).
> @"Roy Eltham" said:
> I 100% agree that indexed method pointers are a negative. The cost of allowing really bad bugs and losing unused method removal are too great.
> I've been coding for nearly 40 years, and I would NEVER use them. I would build a pointer table instead (where the index can be bounds checked, and rearranging functions in their source location doesn't break everything).
I am almost convinced, myself. It sure is a handy little thing to be able to use in a pinch, though. It's kind of like we need some experimental modes that are not for production, but quick idea testing, only.
Instead of providing support for indexed methods, please spend the time on providing object references instead. They would be far more useful and less error prone, and also easier for the user. Languages should be designed to make the language user's job easier, not the language implementer's.
The Spin2 interpreter and compiler are working great together.
I realized the other day that HALF of the bytecodes were actually modal. They perform variable operations after the variable-setup is done from just a handful of other bytecodes. This led to making the variable-setup bytecodes end in '_RET_ SETQ #$160' to select the other set of modal bytecodes for the next XBYTE. This opened up a lot of bytecode space, giving 512 bytecodes total with no discrete switching bytecodes needed to select the set in use. The main bytecodes are all used, while the modal bytecodes are 50% used, with plenty of extra room for immediate bitfield selectors. There's not much more that these modal bytecodes need to do, after reading variables, writing variables, performing math/logic operations on variables, and setting up bitfields.
The Spin2 language seems pretty complete to me, but we'll need to use it to see where the shortcomings are. I'll start a Google Doc soon to document the language.
I still need to make the smart pin helper instructions, but there's no challenge in those, so I've been working on the deeper stuff. The only challenge with the smart-pin-helper instructions will be figuring out what functionality best supports the smart pins.
Tomorrow I want to get the flash loader working.
Aside from those things, I just need to do some cleanup in the PNut.exe menu bar. I've already simplified things, so that the compiler figures out if you've got an assembly program or a Spin2 program and works accordingly. No more different keys to get different list files and downloads. This slashes the complexity of the menus.
Here's the current interpreter. This has been a lot of fun to work on and very challenging. The code that does the Spin2 calls and returns was really difficult to work out and optimize. It took me days to spool it up if I had been away from it too long. In time, we'll find ways to improve it, I think.
So the first thing I notice is that it seems the interpreter is intended to overwrite the storage for the current clock setting.
If the SPIN interpreter was guaranteed to be the first code loaded after boot this wouldn't be a problem, but what if the interpreter is to be launched after other code is already running with the clock set to something other than the compiler supplied setting?
Wouldn't it make sense to load this code above the clock setting storage agreed by the other languages, and leave that space untouched, or at least update it for the benefit of other (non-SPIN) code?
> So the first thing I notice is that it seems the interpreter is intended to overwrite the storage for the current clock setting.
> If the SPIN interpreter was guaranteed to be the first code loaded after boot this wouldn't be a problem, but what if the interpreter is to be launched after other code is already running with the clock set to something other than the compiler supplied setting?
> Wouldn't it make sense to load this code above the clock setting storage agreed by the other languages, and leave that space untouched, or at least update it for the benefit of other (non-SPIN) code?
If the interpreter needs to live elsewhere in memory, all those fixed locations can change, as well. The compiler will need to be able to generate code for different memory-use schemes. The compiled objects don't care, other than they were compiled with some automatic variables that would need to change. The interpreter will need to be reassembled, as well, in such a case, but that can become automatic. As it is, this code runs from $00000, just like a PASM program does. It clears $00000..$0003F for mailbox use, since boot code was originally there. At this point, I just want to make it work without too many other considerations. I'm letting it be perfect in its own sphere right now. Interoperation with other languages will come later.
I knew someone was going to notice that. It sure happened sooner than I thought it would.
Regarding the smartpins, in assembly code we normally do "dirl" to disable the smart pin before configuring registers, and "dirh" to enable it. In Spin2 should these be "pinf" and "pinh" instead? IIRC "pinf" does a "fltl" (which will have the side effect of making the pin an input) and "pinh" does "drvh", which is a superset of "dirh".
Here's the smart pin serial object I'm using with my latest version of fastspin. I'm hoping it'll be compatible with your Spin2 compiler as well (except for the #include at the end, but that's not core to the functionality).
' SmartSerial.spin2
' simple smart pin serial object for P2 eval board
' implements a subset of FullDuplexSerial functionality
'
CON
  _txmode = %0000_0000_000_0000000000000_01_11110_0 'async tx mode, output enabled for smart output
  _rxmode = %0000_0000_000_0000000000000_00_11111_0 'async rx mode, input enabled for smart input

VAR
  long rx_pin, tx_pin

' start the serial port
' mode is ignored
PUB start(rxpin, txpin, mode, baudrate) | bitperiod, bit_mode
  ' calculate cycles per bit
  bitperiod := (CLKFREQ / baudrate)
  bit_mode := 7 + (bitperiod << 16)
  ' save parameters in local variables
  rx_pin := rxpin
  tx_pin := txpin
  ' set up output smartpin
  pinf(txpin)   ' force txpin to be an input
  wrpin(txpin, _txmode)
  wxpin(txpin, bit_mode)
  pinh(txpin)   ' force txpin to be an output
  ' now the input smartpin
  pinf(rxpin)
  wrpin(rxpin, _rxmode)
  wxpin(rxpin, bit_mode)
  pinh(rxpin)   ' enable the rx smartpin

PUB tx(val) | txpin
  txpin := tx_pin
  wypin(txpin, val)
  txflush

PUB txflush | txpin, z
  txpin := tx_pin
  repeat
    z := pinr(txpin)
  while z == 0

' check if byte received (never waits)
' returns -1 if no byte, otherwise byte
PUB rxcheck : rxbyte | rxpin, z
  rxbyte := -1
  rxpin := rx_pin
  z := pinr(rxpin)
  if z
    rxbyte := rdpin(rxpin)>>24

' receive a byte (waits until one ready)
PUB rx : v
  repeat
    v := rxcheck
  while v == -1

'' provide the usual str(), dec(), etc. routines
#include "spin/std_text_routines.spinh"
Eric, I'm using FLTL for PINF() and I would probably opt for PINL() to start the smart pin. I don't recall it mattering what state the OUT bit is in, though low seems cleaner. I could change PINF() to DIRL, which would be more expected, as it wouldn't clear the OUT bits. Maybe we need a PINO() for output that leaves the OUT bits alone, too.
Your code looks like it would compile with my compiler, except for the .spinh file, of course.
I still need to make the smart pin helper instructions, but there's no challenge in those, so I've been working on the deeper stuff. The only challenge with the smart-pin-helper instructions will be figuring out what functionality best supports the smart pins.
Wouldn't it make sense to just have the basic smart pin hardware instructions (wrpin, wxpin, wypin, rdpin, rqpin, akpin) in the language itself, and do higher level smart pin operations (serial, SPI, AtoD, etc.) in objects? That way the source code to the higher level functionality is exposed to the user, and they can learn from it (and port it to other languages like python or C if they want to).
> I still need to make the smart pin helper instructions, but there's no challenge in those, so I've been working on the deeper stuff. The only challenge with the smart-pin-helper instructions will be figuring out what functionality best supports the smart pins.
> Wouldn't it make sense to just have the basic smart pin hardware instructions (wrpin, wxpin, wypin, rdpin, rqpin, akpin) in the language itself, and do higher level smart pin operations (serial, SPI, AtoD, etc.) in objects? That way the source code to the higher level functionality is exposed to the user, and they can learn from it (and port it to other languages like python or C if they want to).
Well, yeah! That's what I would do for me. I would like to not have the helpers, but I think there may be some simple things they could take care of, like DAC and ADC readings. Actually, those would be really well served by cog-resident PASM code that handles things on a live basis, like buffering serial, updating DACs automatically from variables, reading ADCs into variables in the background, etc. That's more what's needed. I keep pondering the idea of the helper instructions and it keeps going nowhere. Maybe I'm too worn out whenever I take time to consider it. Or maybe it was just a bad idea. Less is more, often.
I just looked over the smart pin modes and, yes, there's not much that can be done that WRPIN/WXPIN/WYPIN/RDPIN wouldn't cover completely, already. I think all we need are some symbols to cover the various modes. That's good. Simpler is better.
What's going to really make the smart pins work is interrupt-driven PASM code that lives in the cog register space. Registers $000..$15A are currently free. There are 8 registers at $1D8..$1DF, named R0..R7, that the interpreter doesn't use, but has immediate variable-setup shortcuts for, so that Spin2 and PASM code can use those as efficient conduit, and/or in-line PASM can use them as scratchpad registers without needing to declare any special registers.
Thinking about this, helper instructions were never ideal. We just need symbols for the various modes. Maybe a helper to set up a DAC pin painlessly would be worthwhile, though.
> Eric, I'm using FLTL for PINF() and I would probably opt for PINL() to start the smart pin. I don't recall it mattering what state the OUT bit is in, though low seems cleaner. I could change PINF() to DIRL, which would be more expected, as it wouldn't clear the OUT bits. Maybe we need a PINO() for output that leaves the OUT bits alone, too.
Thanks. I had in my head that PINH() was good because for serial the pin is normally high, but of course it's the smart pin's job to do that, and maybe PINL() will make it less likely that anything will interfere with the smart pin.
> Thinking about this, helper instructions were never ideal. We just need symbols for the various modes. Maybe a helper to set up a DAC pin painlessly would be worthwhile, though.
Perhaps you could just have some type of macro functionality as part of SPIN2, so we can put together wrapper macros that set up Smartpins the right way when certain sequences are involved. It's probably fairly thin already, I would hope, so we may not even need that; but for some people it might be convenient to hide some of the underlying stuff and just write one line and supply a pin, etc.
Also, just looking through the code, this looks like it is going to be pretty fast with all your tight EXECF code sequences. Looking forward to it if I can bring myself to build up another Windows setup.
> Perhaps you could just have some type of macro functionality as part of SPIN2 so we can put together wrapper macros that set up Smartpins the right way when certain sequences are involved.
I think you could probably do this with objects, like:
obj smartpin: "SmartpinHelper"
...
smartpin.dac_setup(pin, whatever arguments we need...)
Not that macros are a bad idea -- I find them very useful myself -- but for this particular application I think we can get by with objects.
Comments
We don't have it, yet.
I'm not sure if the compiler could always figure out use cases. What about nested pointer calls?
This syntax could certainly be extended to something like: if we wanted to get into type checking on parameters. That is quite a can of worms, but on the other hand does add a lot to the safety of the language.
Ah, good point: in "fooptr(barptr())" we don't know how many parameters "fooptr" expects because we don't know how many "barptr" returns.
One brute force solution would be to disallow that (so indirect calls through method pointers cannot be used as parameters to method pointers). That's probably not too awful, it's not a situation that arises very often and it's always possible to work around it with some temporary variables.
A more complicated variation would be to allow it, but only if we can figure out the type in some other way, so: would be OK because the compiler can figure out that barptr returns two values before it needs to use it in the call to fooptr().
I guess theoretically the compiler could also try to trace back and figure out the signature of any methods that were assigned to the original pointers.
Question: can we write just"~~fooptr" instead of "~~fooptr:1(a,b)" because I think :0 and :1 are treated the same? If so this would be a nice default.
I think I like the last two best. Any other thoughts?
That would be really simple and not impose any restrictions on method pointer usage. The programmer had better get his parameters and results straight, though.
That sounds fine. I'm concerned about the indexed variants though: Do those assign a pointer to the index'd method after the given one? That seems like it has a lot of potential to go wrong. It also will make removing unused functions impossible, since we cannot know whether there might be an indirect reference to a method via an index.
If we leave that feature out it'll still be possible to create tables explicitly and call indirectly through the table: The advantage is that every method that's used must be explicitly mentioned in the code, either in a direct call or with an @ in front of it, so the unused ones can be removed.
Are f1 and f2 still legal in Spin2? If so that would be very nice, and also simplifies the method pointer syntax quite a bit, because there really is no 0 return case (every function returns a value, and the ones with no explicit result variable have a single implicit result named "RESULT"). So there's no need to write: we can default to a single return value and just write: That'll cover 99% of the use cases, and when the method pointer does point to a function with multiple return values we can still do:
I implemented the method-pointer syntax I proposed earlier:
I see how method indexes will cause code removal to be impossible. I just figured it would be handy to be able to have a range of methods which take and return the same number of parameters and results, but all do different operations. I would hate to force people to build method pointer tables to achieve the same thing when there are lots of methods involved. Maybe in your compiler, you could give a warning that a method index is inhibiting code-removal optimization.
In Spin2, methods may return NO results, at all, because results must be discretely named. There is no automatic RESULT return value. This happened when I added multiple return values, which now range from 0 to 15. So, your f1 and f2 examples are no longer legal in my Spin2.
I hate to give you any "bad" news, but there are only going to be a few things to adapt to, and they shouldn't be a big deal. I want to write a language description very soon. The only thing I have left to do in the compiler now is add the COGSPIN command which starts Spin2 programs. After that, I'll integrate the interpreter code into PNut.exe and we can start using it. Once it's running, there will probably be a few tweaks, but I think it ought to stabilize quickly, as it's pretty simple.
On the flip side, how often are the indexed method pointer computations going to be used in practice? And what are the potential ways the user can get them wrong? For example, what if they re-arrange any code, or get an index out of range? At least with an explicit method pointer table, if they add a new method or re-arrange methods everything will still work. As a minor benefit, removing a feature makes documentation and testing simpler.
I'm just suggesting that if you do provide a default RESULT return value then "funcptr():0" and "funcptr():1" become the same thing, which really simplifies the syntax for indirect calls (they can both be just "funcptr()"). And it has the additional benefit of preserving Spin1 compatibility.
This actually could be a big deal, because it's legal to ignore return values, so it's difficult to know whether a serial tx method that's normally called as a plain instruction returns a value or not. If you take a pointer to such a method and call through it, how does fptr get called? Is it "fptr(c)" or "fptr(c):1"? What if some serial objects choose to return a value for tx (e.g. to indicate an error) and others don't? Then we can't mix pointers to their "tx" methods. I've actually run into a very similar problem when implementing the C library, because some C functions are "void" (return no values) and others are "int" (return an integer). It wasn't an issue for Spin1, and it'd be nice for Spin2 to avoid this problem.
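To make the mixing problem concrete, here's a hedged Spin2 sketch (hypothetical methods, syntax per this thread's proposals) of two tx methods that differ only in whether they return a value:

```spin
PUB tx_plain(c)               ' returns nothing: callers write "tx_plain(c)"
  ' ... send the byte ...

PUB tx_checked(c) : err       ' returns an error flag: "err := tx_checked(c)"
  ' ... send the byte, set err on failure ...

PUB send(fptr, c)
  ' Is this call "fptr(c)" or "fptr(c):1"? The pointer value alone
  ' doesn't say, so pointers to tx_plain and tx_checked can't be
  ' freely mixed without some convention or default.
  fptr(c)
```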
The way stack variables are laid out in memory is simple: parameters first, then result variables, then local variables, in declaration order.
This works out kind of nicely for in-line PASM, as the first 16 stack variables are copied to $1E0..$1EF and given the same temporary names for use by the PASM code. They are restored to the stack when the PASM code returns.
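A hedged sketch of how that plays out in practice: because the first 16 stack variables are copied to $1E0..$1EF under their Spin names, in-line PASM can use them directly, and the results are written back to the stack on return:

```spin
PUB accum(a, b) : r
  ORG
        mov     r, a    ' a, b, r were copied to cog RAM and keep their names
        add     r, b
  END                   ' r is restored to the stack when the PASM returns
```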
OK, I think we're coming at this from different angles. To explain where I'm coming from, suppose the compiler required every indirect call to have a ':results' suffix, even the ones with no results, so the user would write ':0' explicitly on those calls. (I realize that's not the way your compiler works now, but bear with me.) Now imagine a version of the compiler where if there's a call through a method pointer with no explicit ':results' then ':1' is assumed (again, I realize this is counter-factual, but I think it *could* be done). Then we could leave the suffix off entirely in the common case.
Now my final point: if the ':results' modifier is ignored for instructions, then leaving off the ':0' on the first call shouldn't matter either. Because 'f()' is used as an instruction, it doesn't matter whether we write 'f():0' or 'f():1', right?
Otherwise, it seems to me that for the case of using 'g' as an instruction (it points to a function returning one value) then we'd always have to write "g():1".
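Spelling the argument out as code (a fragment in the counter-factual syntax above; 'f' and 'g' are hypothetical method pointers):

```spin
  ' f points to a method with no results, g to one returning a value
  f():0         ' explicit: no results expected
  f()           ' if ':1' is assumed but results are ignored for
                ' instruction-style calls, this means the same thing
  g():1         ' without the assumed default, every instruction-style
  g()           ' call through g would have to carry the ':1'
```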
When Spin2 is finished I can add a compatibility switch which will at least warn about the ':result' difference. The indexed method pointers I probably won't add -- they're a lot of work(*) and I think they're actually a negative gain (**).
(*) In fastspin a direct method call like "ser.tx(c)" gets assembled directly to "calla #__serialobj_tx" rather than going through a jump table. I could create a jump table internally for use by indexed method pointer lookup, and I will if it turns out to be a very popular feature, but until then there are other things that are a bigger bang for the buck for my time.
(**) since they interfere with unused object removal
I've been coding for nearly 40 years, and I would NEVER use them. I would build a pointer table instead (where the index can be bounds checked, and rearranging functions in their source location doesn't break everything).
> I 100% agree that indexed method pointers are a negative. The cost of allowing really bad bugs and losing unused method removal are too great.
> I've been coding for nearly 40 years, and I would NEVER use them. I would build a pointer table instead (where the index can be bounds checked, and rearranging functions in their source location doesn't break everything).
I am almost convinced, myself. It sure is a handy little thing to be able to use in a pinch, though. It's kind of like we need some experimental modes that are not for production, but for quick idea testing only.
I realized the other day that HALF of the bytecodes were actually modal. They perform variable operations after the variable-setup is done from just a handful of other bytecodes. This led to making the variable-setup bytecodes end in '_RET_ SETQ #$160' to select the other set of modal bytecodes for the next XBYTE. This opened up a lot of bytecode space, giving 512 bytecodes total with no discrete switching bytecodes needed to select the set in use. The main bytecodes are all used, while the modal bytecodes are 50% used, with plenty of extra room for immediate bitfield selectors. There's not much more that these modal bytecodes need to do, after reading variables, writing variables, performing math/logic operations on variables, and setting up bitfields.
The Spin2 language seems pretty complete to me, but we'll need to use it to see where the shortcomings are. I'll start a Google Doc soon to document the language.
I still need to make the smart pin helper instructions, but there's no challenge in those, so I've been working on the deeper stuff. The only challenge with the smart-pin-helper instructions will be figuring out what functionality best supports the smart pins.
Tomorrow I want to get the flash loader working.
Aside from those things, I just need to do some cleanup in the PNut.exe menu bar. I've already simplified things, so that the compiler figures out if you've got an assembly program or a Spin2 program and works accordingly. No more different keys to get different list files and downloads. This slashes the complexity of the menus.
Here's the current interpreter. This has been a lot of fun to work on and very challenging. The code that does the Spin2 calls and returns was really difficult to work out and optimize. It took me days to spool it back up if I had been away from it too long. In time, we'll find ways to improve it, I think.
If the SPIN interpreter was guaranteed to be the first code loaded after boot this wouldn't be a problem, but what if the interpreter is to be launched after other code is already running with the clock set to something other than the compiler supplied setting?
Wouldn't it make sense to load this code above the clock-setting storage location agreed on by the other languages, and leave that space untouched, or at least update it for the benefit of other (non-SPIN) code?
If the interpreter needs to live elsewhere in memory, all those fixed locations can change, as well. The compiler will need to be able to generate code for different memory-use schemes. The compiled objects don't care, other than they were compiled with some automatic variables that would need to change. The interpreter will need to be reassembled, as well, in such a case, but that can become automatic. As it is, this code runs from $00000, just like a PASM program does. It clears $00000..$0003F for mailbox use, since boot code was originally there. At this point, I just want to make it work without too many other considerations. I'm letting it be perfect in its own sphere right now. Interoperation with other languages will come later.
I knew someone was going to notice that. It sure happened sooner than I thought it would.
Here's the smart pin serial object I'm using with my latest version of fastspin. I'm hoping it'll be compatible with your Spin2 compiler as well (except for the #include at the end, but that's not core to the functionality).
Your code looks like it would compile with my compiler, except for the .spinh file, of course.
Wouldn't it make sense to just have the basic smart pin hardware instructions (wrpin, wxpin, wypin, rdpin, rqpin, akpin) in the language itself, and do higher level smart pin operations (serial, SPI, AtoD, etc.) in objects? That way the source code to the higher level functionality is exposed to the user, and they can learn from it (and port it to other languages like python or C if they want to).
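For example, configuring a pin with just those basic instructions might look like this hedged sketch (the mode and timing values are placeholders, not a real serial setup):

```spin
PUB startpin(pin, mode, xval)
  ORG
        wrpin   mode, pin       ' select the smart pin mode
        wxpin   xval, pin       ' X register: timing/config for that mode
        dirh    pin             ' enable the smart pin
  END
```

Anything higher level than this, like a full serial transmit object, would then be ordinary Spin source the user can read and port.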
Well, yeah! That's what I would do for me. I would like to not have the helpers, but I think there may be some simple things they could take care of, like DAC and ADC readings. Actually, those would be really well served by cog-resident PASM code that handles things on a live basis, like buffering serial, updating DACs automatically from variables, reading ADCs into variables in the background, etc. That's more what's needed. I keep pondering the idea of the helper instructions and it keeps going nowhere. Maybe I'm too worn out whenever I take time to consider it. Or maybe it was just a bad idea. Less is more, often.
What's going to really make the smart pins work is interrupt-driven PASM code that lives in the cog register space. Registers $000..$15A are currently free. There are 8 registers at $1D8..$1DF, named R0..R7, that the interpreter doesn't use, but has immediate variable-setup shortcuts for, so that Spin2 and PASM code can use those as efficient conduit, and/or in-line PASM can use them as scratchpad registers without needing to declare any special registers.
Thinking about this, helper instructions were never ideal. We just need symbols for the various modes. Maybe a helper to set up a DAC pin painlessly would be worthwhile, though.
Thanks. I had in my head that PINH() was good because for serial the pin is normally high, but of course it's the smart pin's job to do that, and maybe PINL() will make it less likely that anything will interfere with the smart pin.
Perhaps you could just have some type of macro functionality as part of SPIN2, so we can put together wrapper macros that set up Smartpins the right way if there are certain sequences involved. It's probably fairly thin already anyway, I would hope, so we may not even need that; but for some people it might be convenient to hide some of the underlying stuff and just write one line and supply a pin, etc.
Also, just looking through the code, this looks like it is going to be pretty fast with all your tight EXECF code sequences. Looking forward to it if I can bring myself to build up another Windows setup.
I think you could probably do this with objects.
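For instance (a hedged sketch; the object and method names are made up), a thin object per smart-pin mode gives you the one-liner without any macro machinery:

```spin
OBJ
  tx : "smartpin_serial_tx"     ' hypothetical wrapper object

PUB main()
  tx.start(30, 115_200)         ' one line: pin and baud, setup hidden inside
  tx.send("A")
```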
Not that macros are a bad idea -- I find them very useful myself -- but for this particular application I think we can get by with objects.