Thinking more about object pointers...

They would be like method pointers, minus the 12-bit method index. They would just need the VAR base and code base of an object. Also, maybe an association with an object should be part of the object-pointer variable declaration, in order to reduce syntactic clutter during usage.
Maybe it could all work like this:
OBJ objtype = "objfile"
VAR ^objtype t
PUB ......
  t := @realobject
  t.method()
  x := t.consymbol * 3
Does that work for methods with parameters?

Yes, I just gave the simplest example.

Ideally, make sure there's compatibility with flexspin's version of the feature (which also uses the OBJ equals syntax).

@cgracey said:
Thinking more about object pointers...
Interesting. Where is this "realobject" defined? In objfile somewhere? I like the idea of being able to dynamically select the executed code/data based on a dynamic object pointer's value rather than a static association - it could be useful for my memory driver abstraction and it is heading a bit closer to actual OOP. Although ideally we want the object pointer to be assignable to one of several instances of the same base object type (class) with different implementations for each subclass (if you want some inheritance+polymorphism).
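(For illustration, a hedged sketch of the dynamic selection described above, assuming the proposed syntax; the instance names isp1/isp2 and the method name are hypothetical, since the feature was only a proposal at this point:)

OBJ
  objtype = "objfile"      ' type-only declaration, no instance allocated
  isp1 : "objfile"         ' two real instances of the same type
  isp2 : "objfile"

VAR
  ^objtype t               ' object pointer

PUB pick(use_second)
  if use_second
    t := @isp2             ' retarget the pointer at run time
  else
    t := @isp1
  t.method()               ' calls into whichever instance t points at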
Okay, found the detail in the docs: when using the method pointer, you have to explicitly declare the number of return values if it is non-zero.

@cgracey said:
Thinking more about object pointers...
This is exactly what I would want.

By the way, I have begun rewriting my programs using structs, and I find that I cannot access structure definitions in a child obj from the parent obj.

We can access constant definitions with obj.CONST_NAME, but not structure definitions with obj.STRUCT_NAME.

Maybe it is better to have a #include for the preprocessor, so we can write all structure definitions and global constants in one file and #include it in every object that needs those definitions.
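(For illustration, the kind of access being asked for, with hypothetical names; at this point in the thread, child STRUCT definitions were not yet visible to parents:)

' child object "defs.spin2"
CON STRUCT point(x, y)       ' members default to LONG

' parent object
OBJ defs : "defs"
VAR defs.point p             ' desired: instantiate the child's STRUCT

PUB setp()
  p.x := 10
  p.y := 20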
Interesting. Where is this "realobject" defined? In objfile somewhere?
The "realobject" could come from anywhere. It would just need to exist in memory and be the same type of object.
This is exactly what I would want. [...] I find that I cannot access structure definitions in a child obj from the parent obj. Maybe it is better to have a #include for the preprocessor.
I need to make child-object STRUCT definitions readable by the parent. I will look into this today.
#INCLUDE would be good, but it would need to be implemented in a way where it wouldn't interfere with source file offsets for error reporting.
Most preprocessors deal with this by having a #line directive, which can reset the file name and line number for error reporting, and then inserting appropriate #line directives during and after #include. See e.g. https://learn.microsoft.com/en-us/cpp/preprocessor/hash-line-directive-c-cpp?view=msvc-170

Perhaps Chip's concern is the line-number reporting during compilation, not so much execution, and how potentially nested include files would upset that. The #ifdef'd code could affect this too, depending on how and when that is done, although I believe those lines are just replaced with blanks IIRC, so original line numbers should still be countable.

I imagine some type of stack structure that tracks actual line numbers per currently included file might be useful, if it could refer to the original source file in the nested group and the current line number of the #include line. The line numbers restart at 1 for each new file the compiler includes and are restored to the original line number of the "parent" source file when each include file ends, which then continues incrementing. There may still be problems I don't understand; I might be missing how and when the compiler uses the line numbers if multiple passes are done and some of this information is not present in later passes. For that, you may need to map a global line number to an original file/line number, for example.
This is a common and solvable problem for compiler error reporting. For example, GCC reports errors found in include files using a hierarchy, where each included file and its line number is known and displayed like this:
In file included from /usr/include/stdio.h:28:0,
from ../.././gcc-4.7.0/libgcc/../gcc/tsystem.h:88,
from ../.././gcc-4.7.0/libgcc/libgcc2.c:29:
/usr/include/features.h:324:26: fatal error: bits/predefs.h: No such file or directory
compilation terminated.
Perhaps Chip's concern is the line number reporting during compilation, not so much execution, and how potentially nested include files would upset that.
That's exactly what I was talking about. Typically a preprocessor is implemented as a first pass, before any other part of the compiler, that replaces macros with their definitions, removes code that fails #ifdef tests, and substitutes the contents of a #include file for the #include line. So code that starts off looking like:
CON
#ifdef NOT_DEFINED
#include "a.defs"
#else
#include "b.defs"
#endif
DAT
byte "this is the original spin2"
...
will after preprocessing look something like:
CON
#line 1 "b.defs"
'' contents of file b.defs
MyVal = "B"
#line 6 "orig.spin2"
DAT
byte "this is the original spin2"
The preprocessor deletes lines in #ifdef blocks as appropriate, replacing them with blank lines to keep the line count straight. But around #include it inserts #line directives, so the subsequent compiler passes can figure out what lines to report for errors. You can see this kind of thing in action if you run a stand-alone C preprocessor (like gcc -E). #line is the only directive that the rest of the compiler has to deal with, and it's pretty simple.

Some modern compilers fold the preprocessing into other passes, so they no longer have a separate preprocessor. That's OK too, but it means you need to keep track of the line numbers/file names in another way.
OK, gotcha @ersmith. I see now how the #line directive is used in the final output and how it tracks positions in the source input files. The Microsoft example posted only showed it used for runtime output, which is common in C too, but this embedded source information can also be used to track files and line numbers during compilation. Hopefully Chip can follow something like that.
This is exactly what I would want. [...] Maybe it is better to have a #include for the preprocessor.
I've got objects outputting their STRUCTs now, and parent objects are fully receiving them into their context, but I still need to implement the syntax handling in the compiler, so you'll be able to do 'object.struct'. That part should be much easier to implement. I think I've got the hard parts all done.

I posted a new PNut_v49 which exports CON STRUCTs, just like CON integers and floats have always been.

https://obex.parallax.com/obex/pnut-spin2-latest-version/
CON STRUCT StructX(Object.StructA x[10]) 'StructX is ten StructA's, gets exported
CON STRUCT StructY = Object.StructA 'StructY is a copy of StructA, gets exported
VAR Object.StructA StructJ 'StructJ is an instance of StructA
VAR ^Object.StructA StructK 'StructK is a pointer to StructA
PUB Name(^Object.StructA StructL) 'StructL is a pointer to StructA
DAT StructM Object.StructA 'StructM is an instance of StructA
This took a long time to work out, because I started out keeping STRUCT definitions all isolated, but when exporting to the parent, things got hairy due to all the interdependencies. Now, when STRUCTs are defined that involve other STRUCTs, the other STRUCT definition is added in, instead of being referenced. This simplified many things. It also enabled STRUCTs to be exported upwards using "=" syntax (see 2nd line above).
I found and fixed a bug in the SmoothLine routine that the DEBUG displays use. I had optimized the routine in v44, but didn't realize that I needed to make two variables into 64-bit integers to avoid overflow. The bug caused lines whose slope was near 1 to be drawn in the wrong direction with a vertical or horizontal segment added on.
[haven't been reading]

@cgracey
Just to clarify, Object is the symbol assigned to a child object? i.e., you're getting structure definitions from an external file? And by 'gets exported' I'm assuming you mean it's available for use in creating an instance, like the VAR, PUB, and DAT examples?

@avsa242 said:
Just to clarify, Object is the symbol assigned to a child object? [...]
We can use the STRUCTure definitions from a child object by using the syntax: object.structname.
Any STRUCT that we declare, in turn, is available to any parent object in the future. We can pass structures from the lowest object all the way to the highest by doing this in every object in the chain: CON STRUCT structname = object.structname.
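(A minimal sketch of that chain, with hypothetical file and struct names; each level re-exports the child's STRUCT with the "=" syntax so that its own parent can see it:)

' lowest object, "low.spin2"
CON STRUCT cfg(mode, rate)

' middle object, "mid.spin2"
OBJ low : "low"
CON STRUCT cfg = low.cfg     ' pass the definition up to my parent

' top-level object
OBJ mid : "mid"
VAR mid.cfg c                ' usable here because mid re-exported it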
I posted a new PNut_v50 at the top of this thread.
PNut_v50 has several new features and fixes a bug introduced in PNut_v49 that caused structure sizes to be wrong. Here's what's new in v50:
New DEBUG PLOT commands allowing up to 8 bitmap layers that you can selectively copy into the PLOT window. This is useful for doing photo-realistic displays, where pre-drawn images are copied into the plot window to show, say, a toggle switch being in an ON or OFF position.
New DITTO directive for DAT blocks and inline PASM sections, which can iteratively generate code (see the sketch after this list).
ORGH is now available for inline PASM code in PUB/PRI methods. ORGH has the same usage syntax as ORG, but executes PASM code in-place from hub memory, without loading it into register space.
New @\"string" method is like @"string", but allows escape characters like \n (new line, 10) and \xFF ($FF).
Predefined registers, like PR0, DIRA, OUTA, and INA, can now be used in CON block expressions.
PASM DEBUG instructions can now be expressed with a conditional prefix, like 'IF_C DEBUG'. This is accomplished by placing an opposite-condition 'SKIP #1' before the DEBUG (BRK) instruction.
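(Two hedged sketches of the features above. First, DITTO: the repeat count is fixed at assembly time, and I'm assuming here that the $$ symbol gives the iteration index, per the v50 documentation:)

DAT     ORG
table   DITTO 4
        long $$ * $100       ' emits 0, $100, $200, $300
        DITTO END

(Second, the conditional-DEBUG expansion: the compiler places a SKIP with the opposite condition before the BRK, so the BRK executes only when the DEBUG's condition is true. Roughly, 'IF_C DEBUG' becomes the following, with a hypothetical break code:)

IF_NC   SKIP #1              ' opposite condition: if C is clear, skip the next instruction
        BRK  #code           ' the DEBUG's BRK runs only when C is set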
PASM DEBUG instructions can now be expressed with a conditional prefix, like 'IF_C DEBUG'. This is accomplished by placing an opposite-condition 'SKIP #1' before the DEBUG (BRK) instruction.
But doesn't that mess up the SKIP state? i.e. if skipping is paused during a subroutine call and the callee ends up doing this
Yes. I will need to make a warning about that. Can you think of a better way to do it?
JMPREL #1 ?
@evanh said:
Huh? What's the history to BRK not obeying EEEE encoded bits?
I wanted to ask the same question. What happened to the EEEE bits for these instructions? Are they being used in some other way or cannot be used for some specific reason?
The P2 instruction spreadsheet does mention this for BRK:
"If in debug ISR, set next break condition to D. Else, set BRK code to D[7:0] and unconditionally trigger BRK interrupt, if enabled."
BRK gets detected very early in the pipeline. It is this way because things have to start happening early in order for everything to get done on time. BRK is the only instruction with this constraint.
I presume, then, that there is a distinct pipeline difference between a programmed branch and an IRQ?

On an IRQ, a JMP is fed into the pipeline.
Thanks Chip. Makes more sense now as to why it ended up being different to all the other instructions.
For skipping the debug in the conditional case, to avoid breaking SKIPF code being debugged, could you patch the next instruction into a NOP by using a prior conditional ALTI that changes the instruction field to zeroes? This might require one additional long somewhere to hold some upper zero bits; alternatively, patch in some other innocuous instruction, or something with EEEE bits that are the opposite of the condition you want. Are any of the reserved registers suitable for use in ALTI here, already returning zeroes when referenced as a D field, like INA & INB perhaps, or would they still return valid input pin data?
e.g.
if_c DEBUG("debug message")
becomes
if_c ALTI zero,#%101_000_000 ' modify next instruction field
BRK #code
Another way I thought of is to still break, but use a special reserved break-code value (all zeroes or all ones?) with a preceding conditional ALTS, which gets detected and returns immediately without incurring as much overhead as the "normal" codes. The former approach is preferable, however, for higher performance. JMPREL is good, but it is slower for HUBEXEC and will also interfere with a REP loop.
That doesn't sound any different to a regular conditional branch.
REP blocks INT1/2/3. If it also blocks debug INT0, there's no point having BRK inside a REP loop.