The hardware stack is:
* 22 bits wide (PC, C, Z)
* 8 levels deep
* No indication of full or empty
An 8-level stack? That sounds like the PIC.
That stack is sufficient, though, for programs that run in the cog. If you have huge programs, you'll need to use CALLA/RETA/PUSHA/POPA, and/or the -B suffix versions of those instructions, so that the hub memory can be your huge stack.
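A minimal PASM sketch of that hub-stack idea (the stack address, labels, and the temp register are made up for illustration; the -B forms do the same thing against PTRB instead of PTRA):
DAT
                org     0
entry           mov     ptra, ##$7C000          'point PTRA at some free hub RAM to use as the stack (example address)
                calla   #deep_routine           'CALLA pushes the return address to hub via PTRA, then jumps
                jmp     #$                      'park here when done

deep_routine    pusha   temp                    'PUSHA/POPA spill and restore registers on the same hub stack
                'real work would go here; nesting is limited only by hub RAM
                popa    temp
                reta                            'RETA pops the return address back off the hub stack

temp            long    0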
When I got to the Propeller I knew nothing about microcontrollers. When Parallax released the P1 Verilog, I knew nothing about Verilog. I hate to read because it always seems that the writer is trying to explain something that I have no interest in.
But here I am. Spin is so simple that all a person has to do is look at some examples and start coding.
Verilog is not so simple, but studied in the context of a Propeller, it is VeriEZ. I love the syntax of Verilog's ternary operator and thought from the beginning that Spin should have it. I also think concatenation should be in there.
Including Verilog-like operators in Spin narrows the gap that a person like me has to jump to get from one language to the other.
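For anyone who hasn't used Verilog, the operators being referred to are the conditional (ternary) operator, y = sel ? a : b, and concatenation, bus = {hi, lo}. As a point of comparison, here is roughly what the same selection takes in Spin today (method and variable names invented):
PUB pick(sel, a, b) : y
  'Verilog would express this as:  y = sel ? a : b;
  if sel
    y := a
  else
    y := b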
I think this is how object pointer syntax will work:
OBJ
  b : "Boomerang"       'object with VARs instantiated (:)
  w = "Whizbang"        'object with no VARs instantiated (=)

PUB go(VARptr)
  b.method              'a normal object's VARs are instantiated
  w[VARptr].method      'use a VAR pointer to work with objects whose VARs are not instantiated
And method pointer syntax:
VAR
  long MyMethodPtr[3]                   'a method pointer structure needs 3 longs (vbase,pbase,index)
  long MyMethod                         'handy to keep the address of the structure in a variable

PUB ReturnMethodPtr : MethodPtr
  MyMethod := @MyMethodPtr              'Get a pointer you can pass around and use
  MethodPtr(MyMethod, TheMethod)        'MethodPtr sets up method pointer structure
  MethodPtr(@MyMethodPtr, TheMethod)    '(same thing)
  #MyMethod(Params)                     '# calls method
  #@MyMethodPtr(Params)                 '(same thing)
  return MyMethod                       'address can be passed
  return @MyMethodPtr                   '(same thing)

PRI TheMethod(Params) | LocalVars : ReturnVar
  'stuff happens here
I've been thinking about this for days. I thought of making special VAR types for method pointers, but it seems to cause a net increase in syntax complexity. Using longs is still pretty compact and doesn't create any exceptions to thinking about how variables and pointers work.
VAR
  long MyMethodPtr[3]                   'a method pointer structure needs 3 longs (vbase,pbase,index)
  long MyMethod2Ptr[3]                  'a method pointer structure needs 3 longs (vbase,pbase,index)
  long MyMethod[2]                      'handy to keep the address of the structure in a variable

PUB ReturnMethodPtr : MethodPtr
  MyMethod[0] := @MyMethodPtr           'Get a pointer you can pass around and use
  MyMethod[1] := @MyMethod2Ptr          'Get a pointer you can pass around and use
  MethodPtr(MyMethod[0], TheMethod)     'MethodPtr sets up method pointer structure
  MethodPtr(MyMethod[1], TheMethod2)    'MethodPtr sets up method pointer 2 structure
  #MyMethod[0](Params)                  '# calls 1st method
  #MyMethod[1](Params)                  '# calls 2nd method
  return MyMethod[0]                    'address can be passed
  return MyMethod[1]
  return MyMethod
  OtherMethod(MyMethod)                 'pass the array of pointers

PRI TheMethod(Params) | LocalVars : ReturnVar
  'stuff happens here

PRI TheMethod2(Params) | LocalVars : ReturnVar
  'stuff happens here
Can this work in your setup?
Of course. All regular addressing/indexing syntax is in play. The # just says that we are going to get an address of a structure and use it to call a method.
Why 3 longs for a method pointer? Wouldn't it be more efficient to do the pbase + index lookup when the method pointer address calculation is performed, leaving us with two pointers: a pointer to the function (pbase[index]) and a pointer to the data (vbase)?
I'm asking because the whole vbase, pbase structure is an artifact of one particular implementation. fastspin does have something similar to vbase (a pointer to the object variable data) but there's no pbase. In fastspin calls to methods are done via direct jumps rather than indirect ones. It seems like you might want to eventually do that in the "regular" Spin compiler too. There's no need to do an indirect table lookup for most method dispatch. If the compiler knows the type of the object it can calculate the method address at compile time rather than run time, and save an indirection.
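To picture the two record layouts being compared here (variable names are invented; neither is actual compiler output):
VAR
  long mptr3[3]         'interpreter-style record: vbase, pbase, index (method found via pbase[index] at call time)
  long mptr2[2]         'fastspin-style record: vbase, absolute method address (lookup done when the pointer is made)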
I've started to implement object pointers in fastspin, and had another thought. There may already be a way to specify an object with no VARs instantiated. Doesn't

OBJ
  w[0] : "whizbang"

do that?
I actually like the w = "whizbang" syntax, so I'd be tempted to keep that as an alias for the w[0] form.
Eric
Yeah, that would work. I had thought about using [0] earlier, too, but I was wondering if it would make people think they'd have to put a [1] on a normal object. Either way is good, I suppose. The "=" implies that something distinctly different from ":" is going on.
I couldn't remember what happens if you use a [0] in the PropTool, so I just tried it and it generates an error.
Why 3 longs for a method pointer? Wouldn't it be more efficient to do the pbase + index lookup when the method pointer address calculation is performed, leaving us with two pointers: a pointer to the function (pbase[index]) and a pointer to the data (vbase)?
I'm asking because the whole vbase, pbase structure is an artifact of one particular implementation. fastspin does have something similar to vbase (a pointer to the object variable data) but there's no pbase. In fastspin calls to methods are done via direct jumps rather than indirect ones. It seems like you might want to eventually do that in the "regular" Spin compiler too. There's no need to do an indirect table lookup for most method dispatch. If the compiler knows the type of the object it can calculate the method address at compile time rather than run time, and save an indirection.
Eric
When the method offset is looked up, there's also a value there that states how far to advance the d-pointer for local stack variables. I suppose the compiler could handle that, too.
A pointer to the function is not enough, since you need to also know the pbase address against which all DAT is relatively addressed, along with everything else that is static in the object. I might not be understanding your thinking, though.
The DAT can be addressed absolutely, since there's only one copy for all the objects. We're going to need to know the absolute addresses anyway in order to implement @. I guess for the bytecode you can save space in the opcodes by making it relative to pbase. For compiled PASM code though there's not really any point (it'll slow things down).
I guess we have pretty different implementation models, internally. Which is fine, but it'd be nice if the language definition wasn't quite so tied to any particular implementation model. That'll make it easier to change the internals in the future.
Which isn't a big deal here, of course -- fastspin can easily use the 3 element structure and just use 2 of the fields (pbase is basically always 0 for fastspin). But maybe we should make the structure 4 elements in case we think of some other use for it in the future?
Chip,
Is it possible that the 3-long method pointer stuff could be hidden away from the coder? Since at compile time you will know all of the MethodPtr() instances, you can just create a table of the 3-long structs, and the compiled code would amount to assigning the address of an entry in that table to your "handy" VAR.
So the user just does this:
VAR
  long MyMethod                         'a variable to hold the address of the entry in the structure table

PUB ReturnMethodPtr : MethodPtr
  MethodPtr(MyMethod, TheMethod)        'at compile time, MethodPtr sets up a method pointer structure entry in a table and assigns the address of that entry to MyMethod
  #MyMethod(Params)                     '# calls method
  return MyMethod                       'address can be passed

PRI TheMethod(Params) | LocalVars : ReturnVar
  'stuff happens here
I think this achieves what ersmith is asking for as well. Also, it's much cleaner/simpler for the coder.
We also need to be able to apply MethodPtr to objects other than the current one. Actually, if the compiler is hiding the details we can do:
mytx := MethodPtr(fds.tx)
or even
mytx := @fds.tx
rather than
MethodPtr(mytx, fds.tx)
In all cases mytx would be set to the address of a 3 long block of memory containing the pbase and vbase of fds, and the index of tx; or in fastspin, a 2 long block of memory containing a pointer to the var data of fds and the address of FullDuplexSerial_tx.
I love these ideas, but I think that (vbase,pbase,index) structure would need to go into the VAR section, wouldn't it, so it's unique to that instance? And what about dynamic assignment of any number of method pointers? Those can't be known at compile time.
Making a 4-long type of VAR ("BLOB"?) to serve as a structure for this kind of thing would be fine. Maybe the first long could be the pointer variable and take the given name, itself.
Anyway, the shortened and simplified syntax is awesome looking. Very easy to understand. Just how to do it, is my question.
I guess we have pretty different implementation models, internally. Which is fine, but it'd be nice if the language definition wasn't quite so tied to any particular implementation model. That'll make it easier to change the internals in the future.
Eric
Agreed. I realized, and I must have known this before, that the telescopic nature of objects in bytecode Spin gets around the need for putting something like a pbase pointer into a VAR section. Because a child object's VAR section always starts at some offset of the parent's VAR section, and that is built into the static pbase code, there is some simplification. There is a lot of implied locating of things. But, it would be good to fashion the language, as you said, so it's not implementation-dependent.
By making the method-pointer structure 4 longs, a lot of good things happen:
1) The first long becomes the pointer to the 3-long structure.
2) The symbolic name gets assigned to the first long, making the pointer.
VAR
  long MyMethod[4]                      'a method-pointer structure needs 4 longs (pointer to next long, vbase, pbase, index)

PUB ReturnMethodPtr : MethodPtr
  MethodPtr(MyMethod, TheMethod)        'MethodPtr sets up method pointer structure
  #MyMethod(Params)                     '# calls method
  return MyMethod                       'address can be passed

PRI TheMethod(Params) | LocalVars : ReturnVar
  'stuff happens here
This allows for dynamic reassignment of method pointers, which could be important.
Also, we could have a "BLOB" VAR type which just registers as 4 longs.
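A hypothetical sketch of how such a declaration might read (BLOB is not settled syntax; this just restates the 4-long idea in code form):
VAR
  BLOB MyMethod         'would reserve 4 longs: pointer to the next long, vbase, pbase, index; the name refers to the first long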
Chip,
I guess my implementation idea doesn't work for having a method pointer into a dynamically instantiated object. Hmm... It would work for non-dynamically instantiated objects, but that's not good enough.
I prefer the single "method pointer" blob thing, instead of having two pieces you have to connect together. Also, you need to be able to pass a method pointer as a parameter to a method (local or in another object).
Ersmith, I like your simplified method pointer assignment. By using an MPTR type of VAR (like "BLOB", but a more appropriate name), the compiler could easily key off an assignment and know what to do. Plus, [index] could be used when declaring and assigning method pointers.
I like something like this:
VAR
  MPTR MyMethod                         'a method-pointer structure is 4 longs (pointer to next long, vbase, pbase, index)

PUB ReturnMethodPtr : MethodPtr
  MyMethod := TheMethod                 'sets up method-pointer structure
  #MyMethod(Params)                     '# calls method
  return MyMethod                       'address can be passed

PRI TheMethod(Params) | LocalVars : ReturnVar
  'stuff happens here
If the compiler sees an MPTR type in a method, it could check for a ":=" following it. If found, it knows an assignment is being made. If not, it means execute the method pointed to.
VAR
  MPTR MyMethod                         '4 longs (pointer to next long, vbase, pbase, index)
  MPTR MyMethods[10]                    'an array of method pointers

PUB ReturnMethodPtr : MethodPtr
  MyMethod := TheMethod                 'sets up method-pointer structure
  MyMethod(Params)                      'calls method if MPTR type
  MyMethods[2](Params)                  'calls method if MPTR type
  #Address(Params)                      'calls method, address points to 3-long structure
  return MyMethod                       'address can be passed

PRI TheMethod(Params) | LocalVars : ReturnVar
  'stuff happens here
Rather than a new 4 long MPTR type I think a 2 long DLONG type would be more generally useful. If we're going to add a new type that would be the first one I would add.
Surely there's some way we can cram all the information needed into 64 bits? In fastspin it's trivial (one word is the vbase, the other word is the absolute address of the method to call). Could the Spin interpreter do the pbase lookup at the method pointer calculation spot, and use that? Alternatively, could pbase + index be put into 22 bits + 10 bits? That would allow for 4 MB of object code and 1024 methods per object.
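For illustration, the 22 + 10 bit packing could look something like this (the constant and method name are made up, not proposed syntax):
CON
  INDEX_BITS = 10                       'a 10-bit method index shares one long with a 22-bit pbase

PUB pack_code_long(pbase, index) : packed
  packed := (pbase << INDEX_BITS) | index       'first long of a two-long method pointer; vbase would occupy the second long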
Another option would be to restrict methodptr calls so that they can only operate on objects whose addresses are known at compile time. That way we could stick with Roy's original scheme of having the compiler allocate a static array for the method pointers and using a pointer to that static array. For that matter we could allocate that method pointer array on the stack; that would preclude returning it from methods, but would allow passing arbitrary object/method combinations as parameters.
I think the most common use for method pointers is as parameters to functions that want to operate on "generic" objects.
MyMethod := TheMethod 'sets up method-pointer structure
In Spin1, that actually calls TheMethod instead of returning a pointer to it. And if TheMethod has a parameter list, it will cause a compile error, as it should. Why not:
MyMethod := @TheMethod 'sets up method-pointer structure
That's unambiguous, in that it does not intersect with any other semantics.
-Phil
I agree.
Rather than a new 4 long MPTR type I think a 2 long DLONG type would be more generally useful. If we're going to add a new type that would be the first one I would add.
Surely there's some way we can cram all the information needed into 64 bits? In fastspin it's trivial (one word is the vbase, the other word is the absolute address of the method to call). Could the Spin interpreter do the pbase lookup at the method pointer calculation spot, and use that? Alternatively, could pbase + index be put into 22 bits + 10 bits? That would allow for 4 MB of object code and 1024 methods per object.
Eric
We could absolutely pack stuff tight into two longs. I've just been keeping things open to a full 32-bit memory model, not a 20-bit one.
Another option would be to restrict methodptr calls so that they can only operate on objects whose addresses are known at compile time. That way we could stick with Roy's original scheme of having the compiler allocate a static array for the method pointers and using a pointer to that static array. For that matter we could allocate that method pointer array on the stack; that would preclude returning it from methods, but would allow passing arbitrary object/method combinations as parameters.
I think the most common use for method pointers is as parameters to functions that want to operate on "generic" objects.
Eric
I kind of like allowing things to be dynamic. If they were static, parameter counts could be checked.
I don't like the idea of packing everything down to just enough bits for this version of the chip. Especially if things go well and you guys do a line of chips based on this design, we could very well make one with more memory, and then the packed version would fail there.
Also, I agree with Chip about allowing dynamic object method pointers.
In thinking about method pointers, it keeps coming up that if we had proper structures, things like this would be much simpler. The trouble is, there needs to be some special syntax for dealing with structures. A method could return a whole structure, for example, and an assignment could be made to a whole structure. Structures would need to be able to exist in local variable space, as well. That structures might contain longs, words, and bytes, all at once, throws another complexity in. Anyway, if we could get structure syntax worked out, things like method pointers would become simpler. Returning complex numbers would become simpler, too. It's like a big step that's hard to think about.
If we don't resolve structure syntax, Spin2 will get littered with little caveats to support what might as well be declared structures.
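To make the pain point concrete, here is roughly how a mixed structure has to be faked in Spin today (all names invented for illustration):
CON
  #0, PT_X, PT_Y        'field "offsets" kept as enumerated constants

VAR
  long point[2]         'the long-sized fields
  byte pflags           'the byte-sized field has to live in a separate declaration

PUB set_point(x, y)
  point[PT_X] := x      'every access goes through index arithmetic
  point[PT_Y] := y      'and nothing here can be returned or assigned as one unit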
I don't like the idea of packing everything down to just enough bits for this version of the chip. Especially if things go well and you guys do a line of chips based on this design, we could very well make one with more memory, and then the packed version would fail there.
Also, I agree with Chip about allowing dynamic object method pointers.
I suppose, in a full application, there might only be a dozen indirect method pointers in use. To keep everything 4GB-capable doesn't cost that much. Plus, it keeps things simple. Maybe Spin2 could be made to run on other machines later.