The .exe filename says version 40, but the internals still say version 39 -- might cause some confusion. I did test the new syntax and it works, so this is version 40.
During an email exchange I suggested this syntax to go backward from N-1 to 0.
repeat -BUF_SIZE with i
debug(udec(i))
You seemed to think it was a good idea, but I just checked and that syntax doesn't work as I had hoped (the loop goes backward, but starts at 0, not BUF_SIZE-1).
Can you change the syntax so that the variable comes before the count, like it is in every other language that has that sort of construct? Maybe just REPEAT i TO n.
@JonnyMac said:
The .exe filename says version 40, but the internals still say version 39 -- might cause some confusion. I did test the new syntax and it works, so this is version 40.
During an email exchange I suggested this syntax to go backward from N-1 to 0.
repeat -BUF_SIZE with i
debug(udec(i))
You seemed to think it was a good idea, but I just checked and that syntax doesn't work as I had hoped (the loop goes backward, but starts at 0, not BUF_SIZE-1).
Darn! I forgot to update the version number.
As far as the 'N-1 to 0' thing goes, that would require more space in the cog to implement. Right now, I don't have that space. I will probably get around to it, but it's not going to happen just yet. In the documentation, I point out that N must be positive, not 0 or negative.
Can you change the syntax so that the variable comes before the count, like it is in every other language that has that sort of construct? Maybe just REPEAT i TO n.
The syntax is an extension of 'REPEAT count'. Also, I don't want to give the impression that we are counting 'TO n', because we are going to n-1.
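For reference, the existing REPEAT...FROM...TO form already steps downward when the start value exceeds the end value. A minimal sketch of the N-1-to-0 loop using that form (BUF_SIZE value invented for illustration):

```spin2
CON
  BUF_SIZE = 8                    ' sample value for illustration

PUB countdown() | i
  repeat i from BUF_SIZE-1 to 0   ' counts 7, 6, ... 1, 0
    debug(udec(i))
```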
I just found a serious bug in the Spin2 interpreter that can cause in-line PASM to malfunction.
I was using bit 31 of the RDFAST address as a flag to restore the locals from registers back to hub, just for in-line PASM.
The RDFAST was turning the bytecode pipeline back on. The problem with doing a RDFAST with bit 31 set is that it doesn't wait for the FIFO to begin filling before exiting the instruction. This is fine when your code provides extra cycles before doing an RFBYTE, or something similar, but if the code was not written to accommodate the early exit from RDFAST, it will pick up garbage on a bytecode fetch soon after.
I'm surprised this problem wasn't discovered already. I was noticing my USB code acting weird and I isolated it down to an in-line PASM block that would pass or fail, depending on what its hub address was.
I will post a new version very soon here.
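A hedged PASM2 sketch of the hazard described above: the no-wait form of RDFAST (bit 31 of the block-count operand set) exits before the FIFO has started filling from hub.

```spin2
DAT             org
                ' no-wait RDFAST: D[31] set, so the instruction exits
                ' before the FIFO begins filling from hub
                rdfast  ##$8000_0000, ptra
                ' if too few instructions execute here to cover the FIFO
                ' startup time, the first RFBYTE can return garbage
                rfbyte  pa
```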
@cgracey said:
I was using bit 31 of the RDFAST address as a flag to restore the locals from registers back to hub, just for in-line PASM.
I'm guessing you meant to say bit 31 of the block length (D operand) rather than the hub address (S operand).
I'm surprised this problem wasn't discovered, already. I was noticing my USB code acting weird and I isolated it down to an in-line PASM block that would pass or fail, depending on what its hub address was.
I have to admit, although I do write a lot of inline pasm, I don't test it much any longer in Pnut/Proptool. Flexspin has been performing admirably for me.
Wait! I was all wrong about this. Yes, it was bit 31 of the address (S), which doesn't matter. It was not bit 31 of the block count (D), which would have caused a problem. I wasn't remembering the difference. I had made the modification to not allow bit 31 to go high on the address and the problem persisted, of course. So, it is something else.
My problem appears to be related to using RDPIN on the USB base pin, from multiple cogs. If I switch to RQPIN, the USB pin doesn't get an acknowledge signal asynchronous to the main activity, and this seems to stop the problem. However, it should not matter whether the pin sees an ACK from RDPIN, or not, because it only affects the return IN bit. Somehow, though, it is hosing up the main driver cog's activity, making it think that it is okay to send another byte when it's actually not ready for another byte. I'm looking into the Verilog code to try to find the cause of the interference.
Thanks for pointing this block-count thing out. Looks like there's no need for another version, for now.
I have a question about the graphical debug windows. I use VSC and FlexSpin for compiling and PNut as debug display.
UpdateVfd() is executed once per millisecond because it internally waits for 24 samples of a 24kHz ADC smart pin. The default debug baud rate is 2MBd, so if the debug() outputs roughly 20 bytes each time, it should take around 100µs to send them. I think the code doesn't have a problem with this and is executed in real time, because the whole loop actually takes 10s to complete and I can see from the RX LED of the progplug that debugging output stops after that 10s.
However, the scope display on the PC is delayed. I can see the debug terminal window still scrolling and the scope window drawing dots long after the 10s period. I estimate that it takes more than 20s until everything comes to a stop. Because I know for sure that the hardware transmission has already stopped, there must be a buffer of at least 100kB that gets filled up and is being processed later on the PC side.
Is this normal? I have an almost brand-new i5-13 system, but it's still too slow? If I modify the code so that the debug() is only executed once every 10th loop iteration, then the PC can keep up in real time without problems. This is not a big deal because I can't see more than 50fps anyway. I'm just curious if I did something wrong.
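The every-10th-iteration throttle described above might look something like this (a sketch; the counter name and loop framing are invented, and UpdateVfd() stands for the routine mentioned above):

```spin2
VAR
  long n                        ' debug-throttle counter (invented name)

PUB step(sample)
  UpdateVfd()                   ' blocks for 24 ADC samples, ~1ms at 24kHz
  if (n++ // 10) == 0           ' emit debug output only every 10th pass
    debug(udec(sample))         ' ~20 bytes at 2MBd, roughly 100µs on the wire
```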
@ManAtWork said:
.... UpdateVfd() is executed once per millisecond because it internally waits for 24 samples of a 24kHz ADC smart pin. The default debug baud rate is 2MBd so if the debug() outputs roughly 20 bytes each time it should take around 100µs to send them. I think the code doesn't have a problem with this and is executed in real-time because the whole loop actually takes 10s to complete and I can see from the RX LED of the progplug that debugging output stops after that 10s.
However the scope display on the PC is delayed. I can see the debug terminal window still scrolling and the scope window drawing dots long after the 10s period. I estimate that it takes more than 20s until everything comes to a stop. Because I know for sure that the hardware transmission has already stopped there must be a buffer of at least 100kB that gets filled up and is being processed later at the PC side.
Is this normal? I have an almost brand new i5-13 System but it's still too slow? If I modify the code so that the debug() is only executed once every 10th loop iteration then the PC can keep up in real-time without problems. This is not a big deal because I can't see more than 50fps anyway. I'm just curious if I did something wrong.
Screen scrolling is very slow.
When we did terminal work, I recall changing the display side to be separate from RX, as it never kept up at higher flow speeds.
Or maybe that's also a USB small-packet issue?
I did some tests with USB small-packet echoes; Windows, USB bridges, and USB frames do impose a ceiling, and I also found a hub that imposed a 1ms limit, too, on all traffic.
These tests sent 2 bytes and waited for a 2-byte echo before sending the next 2 bytes at 2Mbd.
This reveals Windows + USB-bridge latencies, for things like RS485 networks.
' XR21B1420 Sending Chars : 5002 TX.RX Done,eCount 5000 eLoops 2500 mCount 2500 mFixUp 0
' SW delay = 0.5426966454833746 Packet average time = 0.0002169918614487703
' Waiting Chars A = 2 Waiting Chars B = 0 Post RX Done, LOC(cCom) 0 RxCount 2 RxLoops 1
' CH9102 Sending Chars : 5002 TX.RX Done,eCount 5000 eLoops 2500 mCount 2500 mFixUp 0
' SW delay = 0.4118906036019325 Packet average time = 0.0001646903652946551
' Waiting Chars A = 2 Waiting Chars B = 0 Post RX Done, LOC(cCom) 0 RxCount 2 RxLoops 1
' CH347 Sending Chars : 5002 TX.RX Done,eCount 5000 eLoops 2500 mCount 2500 mFixUp 0
' SW delay = 5.073910656385124 Packet average time = 0.00202875276144947 << still slower than CH9102
' Waiting Chars A = 2 Waiting Chars B = 0 Post RX Done, LOC(cCom) 0 RxCount 2 RxLoops 1
'PL2303 Sending Chars : 5002 TX.RX Done,eCount 5000 eLoops 2500 mCount 2500 mFixUp 604 << much faster but still needs fixups
' SW delay = 0.4224882982671261 Packet average time = 0.0001689277482075674
' Waiting Chars A = 2 Waiting Chars B = 0 Post RX Done, LOC(cCom) 0 RxCount 2 RxLoops 1
' FT232H stays at 1.00ms as it has timer set to 1ms ??
'
'CP2102N Sending Chars : 5002 TX.RX Done,eCount 5000 eLoops 2500 mCount 2500 mFixUp 0
' SW delay = 2.304060092195869 Packet average time = 0.0009212555346644816 << only slightly faster
' Waiting Chars A = 2 Waiting Chars B = 0 Post RX Done, LOC(cCom) 0 RxCount 2 RxLoops 1
FTDI was worst here, but some other parts do manage 200µs or a little better.
I was surprised the WCH FS-USB CH9102 (CH343?) measured better than their HS-USB CH347 here.
Maybe the drivers are not identical, or the new CH347 needs a firmware update?
There might be a one-packet-per-ms limit on the USB side but I think that's not the limiting factor in my case. As I said, the RX LED on the progplug goes off after the loop has finished. And there is very little buffer memory inside the propeller and the FTDI chip. It definitely can't hold 100kB of data. So the delay must be in the PC software.
Scrolling of the text terminal debug window of PNut might be the cause. But there is no way of disabling that. Even if all of the debug() output goes to the scope window the text window is still updated for every line.
As I suggested earlier, it might be a good idea to print the debug text to an internal buffer only, and update the visible window only 10 times per second. No one can read it faster anyway. And the hidden buffer would allow re-painting the window in the case of a resize or drag event.
DoSETUP(bytes($21,$09,$00,$02,$00,$00,$01,$00)) 'turn on LEDs
Now, byte/word/long arrays can be conveniently expressed in Spin2, right where you use them, so there's no need to make a DAT reference for simple arrays. The array is placed right in the compiled Spin2 code and the method returns the address of the array. It's like STRING(), but for data of any word size. Size overrides are allowed on data, too (BYTE/WORD/LONG).
LSTRING() is like STRING(), but places a length byte at the start of the string and the string can contain zeros.
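A hedged sketch of the new methods in use (this thread spells the array method both BYTES() and BYTE(); the lowercase bytes()/lstring() below follow the DoSETUP() example above):

```spin2
PUB demo() | p, q
  ' inline byte array with size overrides, compiled in place;
  ' the method returns the hub address of the data
  p := bytes($21, $09, word $2344, long $1234_5678)
  ' length-prefixed string: the first byte holds the count (5 here),
  ' and embedded zeros are allowed, unlike with STRING()
  q := lstring("AB", 0, "CD")
```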
Hmm, LSTR just gives you a string prepended with its length? I'd have expected an equivalent of the LSTR debug instruction, which is a pointer + length pair (= a string slice).
This is a nice feature, but it's another breaking change... the word bytes in particular may well be used as a variable in some existing code. This kind of breakage of old code is precisely what programmers get frustrated with on "big" systems like Windows and Linux. With more and more objects being added to the OBEX, it'd be nice to find some way to ensure that those objects will work with future compilers. Some proposals:
(1) We could start all new keywords with % (so %BYTES, %WORDS, and so on). Right now a letter is not legal after a percent sign, so this shouldn't break anything, and provides a huge namespace for expansion.
(2) We could require a special comment like {$v42} at the start of any program that uses the new keywords. To make it easier for tools (like vscode) this comment should be the first thing in the file and the syntax should be as simple as possible. If the comment is missing, or if the version given in the comment is less than the version needed for the keyword, the keyword is just a regular identifier. This could get slightly messy with objects (the compiler would have to keep track of the version of each object separately) but would provide a clean way to upgrade the language without breaking existing OBEX objects.
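Proposal (2) as it might look in a source file (proposed syntax only; nothing here was implemented at the time of this exchange):

```spin2
{$v42}                          ' very first line: this file needs language v42
PUB demo() | p
  p := bytes($01, $02, $03)     ' 'bytes' acts as a keyword in this file
' a file without the {$v42} marker would keep treating 'bytes' as an
' ordinary identifier, so existing OBEX objects compile unchanged
```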
That seems like a good idea. @word[something] is already used (I think?), but regular brackets instead of square seem to be a sufficient differentiator.
@Wuerfel_21 said:
That seems like a good idea. @word[something] is already used (I think?), but regular brackets instead of square seem to be a sufficient differentiator.
I was just thinking something similar... byte[address] exists, but not byte(...)?
So would it be a solution to define an inline byte array as arrayAddress := byte($00,$10,$32),
i.e., just drop the plural?
Or make the compiler smart enough to understand that if bytes/words/longs is not followed by the method syntax it may well be a variable name (preview):
@macca said:
Or make the compiler smart enough to understand that if bytes/words/longs is not followed by the method syntax it may well be a variable name (preview):
That doesn't work if you have a function or function pointer called bytes.
Maybe yes, if the program definitions take precedence over the system definitions.
It may be confusing that you can override a system function (or maybe it's an opportunity?), but if limited to the new keywords it could make the source compatible without much trouble.
There are several nice suggestions for dealing with the specific issue of the new bytes, words, longs keywords, but honestly I think these miss the forest for the trees. Sooner or later we're going to need a way to introduce new keywords into the Spin2 language without breaking existing code. So how about a version identifier for objects? Something like {$v42} as the very first line of an object to indicate that it requires version 42 of the language (i.e. the language as implemented by PNut version 42)?
Even easier would be to not introduce any new keywords that can conflict, at all, either by creative re-use of existing keywords and syntax (as some have suggested here for byte and @) or by using a different namespace for keywords (e.g. by starting new keywords with %).
I got the displayed-version problem fixed in the top post and the repo.
I fixed the floating-point-equality-operators bug and posted a new PNut_v41 at the top of this thread.
Thanks, Timmoore and TonyB_, for reporting the bug and solving the actual problem in the interpreter.
I posted a new v42 at the top of this thread.
v42 - Added LSTRING()/BYTE()/WORD()/LONG() methods.
Neat! So with the override, is this the right format...?
Yes. BYTES() would return a pointer to:
$21,$09,$00,$23,$44,$10,$02,$00,$00,$45,$01,$00
@cgracey Highlighting added to our VSCode extension. (Including hover documentation for these.)
Super!!
Don't even have to look hard for a use of bytes as a variable. Yeah, that's pretty bad.
Wonder if you could just use the new @ operator instead...
Could you have @($21,$09,long $10442300,$02,$00,$00,word $0145,$00)?
One thing with that is the default variable type could be byte, word, or long. So maybe the @ would still need one of those keywords after it?
I like the unambiguous version with the existing keywords better...
Still thinking it would be nice if the @ operator expanded to cover this...
Maybe with byte as the default, but you could use long(), word()...
So, you could do ser.str(@("This is a test.",13,0)), for example.