
View Full Version : A tighter VM loop for LMMs



Phil Pilgrim (PhiPi)
01-14-2008, 02:08 AM
The now-"traditional" way to implement a large memory model (LMM) virtual machine (VM) involves the following code (ref: Bill Henning (http://forums.parallax.com/showthread.php?p=615022), hippy (http://forums.parallax.com/showthread.php?p=698714), et al.):




org 0
_vm mov pc,par

_vm_lp rdlong _xeq,pc
add pc,#4
_xeq nop
jmp #_vm_lp




Because the rdlong is always one instruction too late to meet its appointment with the hub, the loop executes in 32 clocks, making the LMM:PASM speed ratio 1:8. As Bill Henning points out, this could be improved by inlining more rdlong/add/nop triads; but the jmp back to the beginning will always lose synchronization with the hub and cost an extra 16 clocks. He comments further that an autoincrement facility could eliminate the add and tighten the loop, if only such a facility existed in the Propeller.
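The 32-clock figure can be checked with a rough Python model of the fetch loop. This is an editorial sketch, not from the original post, and it simplifies the real part: every ordinary instruction is assumed to take 4 clocks, each cog is assumed to get a hub window every 16 clocks, and a rdlong is assumed to complete 8 clocks after the window it catches.

```python
# Rough model of the traditional LMM fetch loop on the Propeller 1.
# Simplifying assumptions: ordinary instructions take 4 clocks, this cog
# gets a hub window every 16 clocks, and a rdlong completes 8 clocks
# after the first window at or after the clock on which it starts.

HUB_PERIOD = 16
RDLONG_COST = 8

def next_window(t):
    """Clock of the first hub window at or after time t."""
    return -(-t // HUB_PERIOD) * HUB_PERIOD

def traditional_loop(iterations):
    """Steady-state clocks between successive LMM instruction fetches."""
    t = 0
    starts = []
    for _ in range(iterations):
        starts.append(t)
        t = next_window(t) + RDLONG_COST  # rdlong _xeq,pc
        t += 4                            # add    pc,#4
        t += 4                            # _xeq   (the fetched instruction)
        t += 4                            # jmp    #_vm_lp
    return starts[-1] - starts[-2]

print(traditional_loop(10))  # 32 clocks per fetch -> 32:4 = 1:8 vs. PASM
```

Under these assumptions the jmp pushes each rdlong past its window, so the loop settles at two hub periods (32 clocks) per emulated instruction, matching the 1:8 ratio above.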

This got me thinking about other ways to eliminate a step in the VM loop, thus reducing it to 16 clocks from 32. My first thought was to use phsa for the program counter. In a 16-clock loop, it would increment 16 times. This would mean that the machine instructions being emulated would have to be put into RAM on four-long intervals. The gaps in between instructions could be filled with other instruction threads, but this kind of interleaving would be an absolute mess. And this does not even begin to deal with the emulation of jmps. So that idea was quickly abandoned.

The other option is to combine the loop back with an increment of the program counter. The only instruction available to modify a register and jump in one instruction cycle is djnz. Not only does this increment in the wrong direction, it does it by one, not four. But wait! There are two factors in our favor here:
* The source address for a rdlong doesn't have to be a multiple of four. The instruction merely ignores the last two bits.

* This is a virtual machine, after all. We can make instructions run backwards in memory if we want.
With these two principles in mind, I came up with the following VM loop:




org 0
_vmr mov pc,par
jmp #_go

_xeq1 nop
rdlong _xeq0,pc
sub pc,#7
_xeq0 nop
_go rdlong _xeq1,pc
djnz pc,#_xeq1




Each time around the loop, two instructions are executed, pc gets decremented by eight, and 32 clocks transpire. This makes the LMM:PASM speed ratio 1:4, instead of 1:8. Now pc doesn't decrement by four at each step; it alternately decrements by one and seven. But what matters is that pc[15..2] decrements at a constant rate, and here it does.
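The 1-and-7 decrement pattern can be sketched in Python (an editorial model, assuming pc starts long-aligned, as it is when loaded from par):

```python
# Sketch of the reversed LMM loop's fetch addresses: pc decrements
# alternately by 1 (djnz) and 7 (sub pc,#7). Since rdlong ignores
# pc[1..0], what matters is that pc[15..2] steps down by exactly one
# per fetch -- a uniform rate, so every fetch can hit a hub window.

def fetch_addresses(pc, n):
    """First n addresses presented to rdlong by the _go/_xeq1/_xeq0 loop."""
    seq = []
    while len(seq) < n:
        seq.append(pc)          # _go  rdlong _xeq1,pc
        pc -= 1                 #      djnz   pc,#_xeq1
        if len(seq) < n:
            seq.append(pc)      #      rdlong _xeq0,pc
        pc -= 7                 #      sub    pc,#7
    return seq

addrs = fetch_addresses(0x1000, 6)
print(addrs)                    # [4096, 4095, 4088, 4087, 4080, 4079]
print([a >> 2 for a in addrs])  # [1024, 1023, 1022, 1021, 1020, 1019]
```

The second line of output is the point: the long addresses pc[15..2] descend by exactly one per fetch even though pc itself jitters by 1 and 7.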

There are a couple of implications of this loop:

* The emulated code has to run backwards in memory. Unless one wants to write it that way, some sort of preprocessor will be necessary to reformat normal-looking code. A compiler, of course, is one example of a preprocessor.

* VM subroutines that get called by the emulated code will have to keep pc in phase with the 1-and-7 decrementing. The simplest way to do this would be to andn pc,#3 and always return to _go.
My intent is to write a simple preprocessor for this VM and make it available on the web for people to try.

-Phil

deSilva
01-14-2008, 02:48 AM
Phil, this is ingenious!
Though you should check this:

Phil Pilgrim (PhiPi) said...
* The source address for a rdlong doesn't have to be a multiple of four. The instruction merely ignores the last two bits.

I doubt this statement... Parallax has always emphasized that this kind of addressing yields undefined results. I can very well imagine that the two LSBs - which will be used for the word and byte extraction - can have side effects also with RDLONG...

hippy
01-14-2008, 02:48 AM
@ Phil : Excellent and inspired thinking.

It would be possible to just reverse the entire table of LMM code at runtime if it were linear in a contiguous block and also 'reverse' any addresses when first called or used within the LMM code. That would allow Reversed-LMM to be more easily written within the PropTool.

The overhead of reversing addresses when needed would be minimal compared to the gains of hitting hub access sweet spots, and the entire LMM code reversal is a one-time, per-boot-up process. If LMM jump destinations were +/- offsets rather than absolute, that would minimise the overhead further.

If the LMM code is reversed at run time it can be written back to the Eeprom, making it a once-per-download occurrence with no extra overhead on subsequent boot-ups. It's entirely possible to determine whether code is running from Eeprom ( power-on/F11 ) or Ram ( F10 ) and behave appropriately in each case.

hippy
01-14-2008, 02:55 AM
@ deSilva : I haven't read any Parallax statements on the lsb's but I've not seen any problems caused by them not both being zero for rdlong in practice. That doesn't mean there aren't potential problems.

Phil Pilgrim (PhiPi)
01-14-2008, 02:57 AM
deSilva,

I'm going by the original Assembly Language Instruction Set table which states, for rdlong, "Read main memory long S[15..2] into D." I've tested the loop on some simple code, and it works.

-Phil

Phil Pilgrim (PhiPi)
01-14-2008, 03:01 AM
Hippy,

The VM I plan to implement would be both relocatable and reentrant, so all jumps and calls would be relative. I also plan to implement short branches, which would not entail calls outside of the VM loop.

-Phil

hippy
01-14-2008, 09:12 AM
This looks very promising. I modified my LmmVm to do this Reverse LMM, and it was interesting to compare the two - Version 002 = traditional, version 003 = Reversed.

No change in "LmmTest1", but "LmmTest2" deteriorated, dropping from 4m 20s to 4m 40s. That's probably because there are few sequential native instructions and more calls into the kernel; hub access sweet spots probably got shifted and missed for "LmmTest2". Neither was a fair test, and there's not enough here to compare the two.

I rewrote "Test1" to add four add's and four sub's, so functionality was unchanged but there were more contiguous native instructions to execute ....




m:ss sec ratio

Traditional LMM 7:20 440 1:7.4 1 x rdlong + 1 x jmp
Unrolled LMM 5:00 300 1:5 4 x rdlong + 1 x jmp
Reversed LMM 4:40 280 1:4.7
Native PASM 1:00 60 1:1





A 30% improvement there over traditional LMM and getting very close to that 1:4 nirvana.

Added : It also performed slightly better in this specific test than the traditional LMM with an unrolled loop.

Post Edited (hippy) : 1/14/2008 4:21:49 AM GMT

cgracey
01-14-2008, 09:34 AM
deSilva said...
Phil, this is ingenious!
Though you should check this:

Phil Pilgrim (PhiPi) said...
* The source address for a rdlong doesn't have to be a multiple of four. The instruction merely ignores the last two bits.

I doubt this statement... Parallax has always emphazised that such kind af addressing yields undefined results. I can very well imagine that the two LSBs - which will be used for the word and byte exraction - can have side effects also with RDLONG.....
This is okay! In a RDLONG/WRLONG the two address LSBs are ignored. In a RDWORD/WRWORD only one address LSB is ignored. In a RDBYTE/WRBYTE all address bits are used. Of course, bits 31..16 are always ignored in every case, as there's only 64KB of main memory.
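Chip's effective-address rules can be stated as a small sketch (an editorial illustration; the function name is made up):

```python
# The effective-address rules for Propeller 1 hub accesses: addresses
# are 16 bits, and low bits are ignored according to access size.

def effective_address(addr, size):
    addr &= 0xFFFF            # bits 31..16 always ignored (64KB hub)
    if size == 4:             # rdlong/wrlong: bits 1..0 ignored
        return addr & ~3
    if size == 2:             # rdword/wrword: bit 0 ignored
        return addr & ~1
    return addr               # rdbyte/wrbyte: all bits used

assert effective_address(0x1003, 4) == 0x1000
assert effective_address(0x1003, 2) == 0x1002
assert effective_address(0x1003, 1) == 0x1003
assert effective_address(0x1_1000, 4) == 0x1000   # high bits ignored
```

This is exactly why the reversed loop can present unaligned pc values to rdlong with impunity.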

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔


Chip Gracey
Parallax, Inc.

cgracey
01-14-2008, 09:37 AM
Phil, that's ingenious, all right! Good job!


Phil Pilgrim (PhiPi)
01-14-2008, 10:49 AM
Thanks, guys! There's more to come. It may take a couple days, though...

-Phil

hippy
01-14-2008, 10:46 PM
I've done some more testing with my LmmVm ...




m:ss sec ratio

Traditional LMM 7:20 440 1:7.4 1 x rdlong + 1 x jmp
Unrolled LMM 5:40 340 1:5.7 2 x rdlong + 1 x jmp
Unrolled LMM 5:20 320 1:5.4 3 x rdlong + 1 x jmp
Unrolled LMM 5:00 300 1:5 4 x rdlong + 1 x jmp
Unrolled LMM 4:40 280 1:4.7 5 x rdlong + 1 x jmp
Unrolled LMM 4:40 280 1:4.7 6 x rdlong + 1 x jmp
Reversed LMM 4:40 280 1:4.7
Native PASM 1:00 60 1:1





So some general observations on LMM which I believe may be useful ...

Always unroll a traditional LMM loop. Even one un-rolling gives a massive speed improvement for minimal extra code use ( three longs ).

Unrolling is a matter of diminishing returns. It eats up more code space and is limited by how many contiguous native instructions there are between calls into kernel. The ideal amount of unrolling depends on the contiguous nature of native instructions being interpreted.
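The diminishing returns from unrolling can be seen in an idealized model (an editorial sketch, not hippy's benchmark code; real measurements above include kernel-call effects, so they converge more slowly):

```python
# Idealized clocks-per-instruction for an n-way unrolled LMM loop.
# Assumption: inside the unrolled run each rdlong/add/xeq triad takes
# exactly one 16-clock hub period, but the closing jmp makes the next
# fetch miss its window, costing one extra period on the last fetch.

def unrolled_ratio(n):
    clocks = (n - 1) * 16 + 32    # n fetches, last one delayed by jmp
    return clocks / n / 4         # vs. 4 clocks per native PASM instruction

for n in (1, 2, 3, 4, 5, 6):
    print(n, round(unrolled_ratio(n), 2))
# 1 -> 8.0, 4 -> 5.0, 6 -> 4.67: converging toward, but never reaching, 1:4
```

The model reproduces the 1:8 and 1:5 figures in the table for 1x and 4x unrolling, and shows why each extra unroll buys less than the one before.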

Reversed LMM delivers performance as good as any unrolled LMM. That's to be expected: it never misses a 'hub access sweet spot'. It uses a fixed amount of code space, less than a 2 x rdlong unrolled loop, and there is no "how much unrolling is ideal?" question to answer.

Inefficiencies in calls to the kernel have an impact on overall performance; hub access sweet spots may get missed there. The more calls, the greater the impact. The greater the frequency of kernel calls, the closer every implementation comes to traditional unrolled LMM performance.

Choice of LMM branch-addressing mechanism can have a major effect. Absolute addresses usually need adjusting ( which can be done at compile time for traditional LMM ) and are more complicated for reversed LMM. Relative +/- addressing is faster to execute but not easy to write using the PropTool; it needs an "@" symbol for the current hub address, like "$" for the current cog address, to allow relative offsets to be calculated: "long @-@Label". Third-party tools can overcome this lack.

Using the PropTool to write LMM code necessitates overheads to make life easier for the LMM coder. Third party tools would minimise such overheads. The overheads appear to be the same regardless of LMM type.

The 1:4 nirvana for LMM will never be reached in a real-world program, but can be brought close. Efficient calls into the kernel are the key there. The test program used for these benchmarks is not ideal as it does its own hub access which derails smooth running.

Debugging reversed LMM kernel calls can be a PITA: descending addresses that are not long-aligned. Working out when non-aligned addresses have to be corrected, or can simply be added to or subtracted from, takes some thought and paper-work. Always aligning could add unnecessary overhead and waste code space.

It is not easy to produce a single source code LMM implementation which can have compile-time or run-time selection of traditional / unrolled and reversed LMM; "add pc" needs to become "sub pc" in numerous places. Kernel developing and debugging with traditional LMM and then switching to reversed LMM is easy and would probably be preferable. No conditional compilation supported by the PropTool though so third-party tools or extensive error-prone manual editing need to be used.

Reversed LMM may be complicated for the LMM developer, but not for the end-user. The end-user should see no difference in what they have to do to use and code in LMM regardless of implementation.

Reversed LMM has an overhead in needing to reverse the LMM code, either at run time or at compile time using third-party tools. Run-time reversal of LMM code is neither excessively time-consuming nor code-space hungry, and it can be coded in PASM for maximum speed. Trying to write LMM code in reverse order using the PropTool is unnatural and hard, but could be done.

All-in, from my perspective, Traditional LMM is easier for kernel development, reversed LMM better for delivery. Phil deserves heaps of praise for his innovative thinking and solution.

Phil Pilgrim (PhiPi)
01-15-2008, 03:09 AM
Wow, hippy, you've really put a lot of work into this! Thanks for the thorough analysis!

The alignment issues for the reverse VM are something I'm glad you brought up. I was planning on mixing longs and words in the VM code, so the preprocessor will have to be sure to start on a long boundary and always issue words in even multiples.

The absolute vs. relative addressing issue is a complex one. Choosing to use relative addressing seemed like a no-brainer at first, since you can't determine absolute hub addresses at compile time, and the VM would have to add the object offset to get the target address. OTOH, each jump uses up an extra long for the target offset. This could be eliminated for absolute jumps by using a cog-resident address table. Then you'd encode the address table index into the destination field of the jmp (actually a jmpret ... nr) to the VM jmp handler (_jmp):




jmp target becomes
jmpret target_reg,#_jmp nr
...
target_reg word @target 'This line goes into the cog space within the VM.




Of course, no more than 512 target addresses can be accommodated this way. For that reason, it won't work well for relative jumps, since every relative address could be different, and every jmp would thus require a different table entry. Also, indexing the table and unpacking adjacent words adds to the VM overhead.
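The destination-field encoding can be sketched as follows. This is an editorial illustration: on the Propeller 1 the destination field occupies bits 17..9 of an instruction, but the base opcode value below is a placeholder, not the real jmpret encoding.

```python
# Sketch of the jump-table-index-in-destination-field idea: the table
# index rides in the 9-bit destination field (bits 17..9) of a
# "jmpret index,#_jmp nr" long, so the VM's _jmp handler can recover
# it with a shift and a mask. SKELETON is a made-up stand-in for the
# encoded jmpret with a zero destination field.

SKELETON = 0x5C000000            # hypothetical jmpret ...,#_jmp nr, dest = 0

def encode_jump(table_index):
    assert 0 <= table_index < 512   # 9-bit field: at most 512 table entries
    return SKELETON | (table_index << 9)

def decode_index(instruction):
    return (instruction >> 9) & 0x1FF

ins = encode_jump(300)
assert decode_index(ins) == 300
```

The 9-bit field is also where the 512-entry ceiling mentioned above comes from.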

Short branches, OTOH, are easy:




br target becomes
:here add _pc,#@target-@:here 'for forward branches
:here sub _pc,#@:here-@target 'for backward branches

'Note: Can't use $ here, since it's relative to the last org.




This provides a branch range of ±128 instructions.

One advantage to using a jump table is that it becomes possible for a pre-processor to optimize between jmps and brs. This is because each type has the same compiled length. So a decision about which type to use won't affect decisions that have already been made in the code above it.

A final note about traditional vs. backward VMs: With a traditional VM, preprocessing could be done with a good macro facility. That's simply not possible with the reverse VM unless the reversal is done at runtime. Also, to accommodate hyperthreading (two processes running in parallel by taking turns), the reverse VM has to be unrolled to four fetches, decrementing _pc0 by 7 and 1, and _pc1 by 4 and 4.

-Phil

Post Edited (Phil Pilgrim (PhiPi)) : 1/14/2008 7:51:39 PM GMT

hippy
01-15-2008, 03:35 AM
You gave me a +30% improvement in execution speed and pointed out the major flaw in my simple unrolled loop, so I thought it was only fair to help return the favour :)

I think VMs and LMMs are going to become increasingly useful and important as time goes on. I actually see LMM as an integral part of any generic VM now, as well as worthwhile in its own right. A 4 MIPS-and-up LMM on a 20 MIPS core is quite impressive.

Relative addressing isn't too bad in my implementation because I replace "jmp #dst" with a "jmp #LMM_jmp" with a word/long following holding the hub address and that would simply need converting to +/-. It does need an extra hub access though to fetch it. This is where I lose efficiency I think in my LMM implementation.

I'm not sure about using a jump table as it eats into available cog memory and limits user register use and number of jumps. Ultimately every LMM compiler should generate its own LMM kernel tailored to what it needs so it can choose 'the best strategy' for any given source code. That's possibly a bit of a way off though !

As we head towards 'optimal performance' it becomes easier to look at each individual issue in isolation and weigh up the pros and cons of each possibility.

Phil Pilgrim (PhiPi)
01-15-2008, 04:19 AM
Hippy,

In your VM code you have a constant value of $0010 - 4 assigned to k_CogBp_Minus_4. This is added to the @addrs to get the real hub address. I take it that this value is application-specific, depending on where the object is actually loaded, right? (As you can see, I'm still dithering about the relative address thing! :) )

As to the jump table, I agree that it could take up some precious space. However, I see some offsetting considerations:

* It can't take up more than 256 longs, since the word index is limited to 0 - 511.

* Since the user code is executed from the hub, much more room is available in the cog, anyway, than with PASM code (assuming the stack is in the hub).

* A jump table would free the user from having to make the br/jmp decision for each jump in his program. The preprocessor could do the optimizing, since all addresses are known after the first pass, prior to any optimization.

* The jump table could contain absolute addresses, since the object offset can be added to each one just once. (Of course, this is negated by the unpacking overhead.)

* With a jump table, there'd be no advantage to implementing djnz, tjz, and tjnz as special cases. They could simply be preprocessed to:


sub reg,#1 wz 'DJNZ
if_nz jmpret table_index,#_jmp nr

test reg wz 'TJZ
if_z jmpret table_index,#_jmp nr

test reg wz 'TJNZ
if_nz jmpret table_index,#_jmp nr




Of course, implementing the latter point disturbs the zero flag, but that's a pretty minor thing.

Here's a thought for the jump table: Have two jump processors, one for even table (word) indices and one for odd table indices. One of them will need to shift the table entry and the other one won't. The preprocessor can make a simple determination whether to assign an even or odd address, based on how often the associated label is referenced. (This isn't optimum, of course, since a static count may not reflect the dynamic reality.)
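The even/odd split can be sketched like this (an editorial model of the word-packed table; function names are made up):

```python
# Sketch of the word-packed jump table: two 16-bit target addresses per
# cog long, with separate handlers for even (low word) and odd (high
# word) indices so that only one of the two handlers needs a shift.

def pack_table(addresses):
    """Pack 16-bit hub addresses two-per-long, low word first."""
    padded = addresses + [0] * (len(addresses) % 2)
    return [padded[i] | (padded[i + 1] << 16)
            for i in range(0, len(padded), 2)]

def lookup(table, index):
    entry = table[index >> 1]
    if index & 1:                 # odd index: the "_jmp1" handler shifts
        return entry >> 16
    return entry & 0xFFFF         # even index: "_jmp0" just masks

table = pack_table([0x7000, 0x7010, 0x7020])
assert lookup(table, 0) == 0x7000
assert lookup(table, 1) == 0x7010
assert lookup(table, 2) == 0x7020
```

Assigning frequently referenced labels to whichever parity has the cheaper handler is then just a matter of which slot the preprocessor packs them into.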

-Phil

Post Edited (Phil Pilgrim (PhiPi)) : 1/14/2008 8:36:24 PM GMT

hippy
01-15-2008, 05:05 AM
you have a constant value of $0010 - 4 assigned to k_CogBp_Minus_4 ...

Yes, when using "long @Label" the address put into hub memory is offset to the base of the object it's in, and that starts at $0010 for the top-level object / main program. It would need to change if the LMM is in a sub-object.

The -4 is because an 'add #4' follows 'add k_CogBp_Minus_4'; it was easier to do it that way than add the $0010 and jump round the second addition.

I've not written much LMM which uses loops or jumps/calls to other LMM, so I'm not an expert ( and also the overhead hasn't been much of a concern ). When I did do it, it was before I'd written LmmVm, and it was a nightmare. I like the idea of embedding the offset in the jmp/jmpret, +/-256 longs. Maybe do that, and force a different/more complex jump when needed. Hopefully there will only be a few of the latter. I'll have a think on the options.

Phil Pilgrim (PhiPi)
01-15-2008, 08:52 AM
The devil is certainly in the details. Implementing a jump table is harder than I thought it would be. Here are some options for jmp #label:

1. mov _pc, _table_reg

Advantages: Fast, inline, and uses only one instruction.
Disadvantages: Inefficient use of cog RAM, since each table entry is a long. Can't be used with reverse VM, since there's no way to tell what the last two bits of _pc were without adding a bunch more instructions.

2. add _pc, _table_reg

Advantages: Fast, inline, uses only one instruction, and can be used with reverse VM.
Disadvantages: Requires one long table entry for each jump, instead of one for each target label => more entries.

3. jmpret _table_reg, #_jmp0 nr 'or _jmp1 for high word in table entry

Advantages: Can use word-packed jump tables, since it executes external VM code to do the unpacking.
Disadvantages: Using with any unrolled VM loop entails rolling back the PC and issuing another rdlong just to access the destination field (which contains the register address). This is because you don't know which of the multiple VM-loop "sockets" the jump to _jmpn issued from.

The reversed-VM code for the third option would look something like this:




_jmp0 add _pc,#4 'Repoint to the instruction.
rdlong _inst0,_pc 'Reread it.
shr _inst0,#9 'Get the jump table address (destination field) into source position.
xor _inst0,_mod2mov 'Convert the instruction to: mov _pc,reg
nop 'Wish there was something useful to put here.
_inst0 nop 'Load _pc with jump address.
' shr _pc,#16 Used only in _jmp1 to access high word.
jmp #_go

...

_mod2mov long (%1010000_0000_0100_100011111 ^ _pc) << 9




That's a lot of screwing around (7 or 8 instructions, including a hub op) to do a jump. OTOH, by keeping the user jmp code confined to a single long, it allows the preprocessor to optimize out most of the jumps by changing them to relative branches. So it may actually save time in the long run.

-Phil

hippy
01-15-2008, 01:28 PM
I think I've found a problem with trying to embed a cog register address inside a native
instruction call into the kernel. For example, to load a cog register with a 32-bit constant
I currently use ...




jmp #LMM_Load
long <reg>
long <constant>





better is ...




jmpret <reg>,#LMM_Load NR ' or ... long LMM_CALL | LMM_Load | <reg> << 9
long <constant>





When LMM_Load gets executed, in the original scheme ( <reg> in the next long ) the pc already
points to that long, so it's a simple rdlong and then putting the <reg> where necessary.

Trying to extract the <reg> from the instruction is easy in the traditional LMM when
not unrolled, it's in cog register _xeq where the jmp was executed. If there's two
or more _xeq's ( as with unrolled LMM and reversed LMM ) when one hits LMM_Load
one doesn't know which _xeq to retrieve <reg> from without checking to see what
pc holds.

It may be that the rdlong doesn't hit a sweet spot so the extra overhead may be no
greater. Testing the pc would require using C or Z which means having to preserve
those flags ( avoided so far ) or some hackery ( untested ) to get round that ...




mov tmp,pc
and tmp,#1
djnz tmp,#:Skip
mov _xeq1,_xeq0
:Skip movd :Opc,_xeq0
nop
:Opc rdlong 0-0,pc




I'm going to have to do some further testing.

Phil Pilgrim (PhiPi)
01-15-2008, 01:55 PM
Using jmpret <reg>,#LMM_op nr with unrolled code, I haven't found a way around backing up and rereading the command from the hub to retrieve the destination field. I just reread your code. Now I see what you did. Very clever!

One issue I see with having long constants embedded in the code is that making the statement that uses it conditional is harder. That's because the "condition bits" in the constant might not all be zero, which is necessary to make the constant a nop. Of course, you could have 16 different LMM_load commands, one for each condition code, so you could leave the four CC bits zero in the constant and fill them in in the VM; but that would be stretching things!

I think my approach would be similar to the jump table idea: Just put the constants above #511 into cog registers and load them from there. (Of course, I never was planning to put the stack in the cog, so there seems like there'd be room aplenty. But I might get fooled.)

-Phil

Post Edited (Phil Pilgrim (PhiPi)) : 1/15/2008 6:53:58 AM GMT

Ariba
01-15-2008, 02:49 PM
I also made my own LMM kernel (traditional, 4 times unrolled) with an assembler for it. It's all in a very early state and not ready to release.

Hippy, I also use a LoadLong instruction that loads the next long into a dedicated register, but your idea with a destination register is very smart. Here is code that should do it:



sub pc,#4
rdlong xeq0,pc 'reread the instruction
shr xeq0,#9 'Dst->Src
movd :Opc,xeq0 'Src! to Dst of Opc
add pc,#4
:Opc rdlong 0-0,pc
add pc,#4 'Skip long




Phil:

Phil Pilgrim (PhiPi) said...

Of course, implementing the latter point disturbs the zero flag, but that's a pretty minor thing.


First I was thinking similarly, but then I changed my LMM core to not affect any flags. That makes it much easier for the application code.

Andy

Phil Pilgrim (PhiPi)
01-15-2008, 02:55 PM
Ariba,

How do you make your LoadLong conditional?

-Phil

Ym2413a
01-15-2008, 03:10 PM
This is truly interesting.
I personally like the idea of the VM program being able to reside anywhere in memory and be loaded and run.
Addresses and native code that are relocatable in memory open the door for all types of complex programs.

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Share the knowledge: propeller.wikispaces.com (http://propeller.wikispaces.com)
Lets make some music: www.andrewarsenault.com/hss (http://www.andrewarsenault.com/hss)

Ariba
01-15-2008, 03:15 PM
My LoadLong is not conditional (the only one of my LMM instructions that isn't); or better: it's conditional only if you use a long value of fewer than 18 bits. I've never had a problem with this and have coded a lot with LMM in the last weeks.
If really necessary I would simply use a conditional Fjump or Fbranch before the LoadLong (Fbranch is my one-long jump instruction with a jump range of +/-1024 addresses). The Fjump also uses the next long, but with an absolute address in that long. That's no problem, because even the PropII has no more than 18 bits for the address.

I use the cog RAM for a little stack, a lot of registers, and a cache, so I don't want to have jump or constant tables in it.

Andy

Phil Pilgrim (PhiPi)
01-15-2008, 04:21 PM
Hippy,

You might want to recheck your code fragment that tests pc[1..0]. In your movd, _xeq0[src] (not _xeq0[dst]) gets copied to :Opc[dst]. Here's something that might work:




_ld mov _scr,_pc
and _scr,#3
tjz _scr,#$+2
mov _xeq0,_xeq1
shr _xeq0,#9
movd :ld,_xeq0 'Register address into :ld's dest field.
nop
:ld rdlong 0-0,_pc
sub _pc,#4
andn _pc,#3
jmp #_go




Here's another version (longer code, quicker execution):




_ld mov _scr,_pc
and _scr,#3
tjz _scr,#:use1

:use0 shr _xeq0,#9
movd :ld0,_xeq0
nop
:ld0 rdlong 0-0,_pc
sub _pc,#4
jmp #_xeq0+1

:use1 shr _xeq1,#9
movd :ld1,_xeq1
nop
:ld1 rdlong 0-0,_pc
sub _pc,#4
jmp #_xeq1+1




-Phil

hippy
01-16-2008, 01:03 AM
Thanks Phil. I also like the "longer code" solution. How many times the code has to be included
determines the choice between short/slow and longer/fast there.

I have gone back to the traditional LMM because I still have a mental block with descending pc
addresses. The same problem occurs with an unrolled loop, more so because the pc is always
long aligned. The solution there is not to use two "add pc,#4" but "add pc,#7" then "add pc,#1"
as with the reversed LMM, which allows _xeq0 and _xeq1 as last executed native opcode to be
determined. So you had a 'double-whammy' of a good idea.

On handling conditionals, I do that by a conditional skip of the next multi-instruction LMM code
which is quite a lot of overhead. My philosophy is to make LMM easy to write and leave getting
efficiency down to the LMM writer; if they pre-load a register with a long they can use normal
conditional register 'mov' later at maximum LMM speed. If they want an easy life, they have
to suffer the inefficiency consequences.

I think it has to be accepted that LMM isn't PASM so there always will be some inefficiency but
as efficient as possible is the ultimate goal. With the right tools that becomes so much easier.

I'm going to put some serious effort into seeing if I can create a single source to develop the
LMM kernel using traditional LMM and dynamically change that to reversed LMM at run-time.

Phil Pilgrim (PhiPi)
01-16-2008, 01:28 AM
Here's a load long that's shorter and quicker still:




_ld mov _scr0,_pc
mov _scr1,_pc
sub _pc,#4
and _scr0,#3
tjnz _scr0,#:use1

:use0 xor _xeq0,_fix
jmp #_xeq0

:use1 xor _xeq1,_fix
jmp #_xeq1
...
_fix long (%010111_0001_1111 ^ %000010_0010_1111) << 18 | (_scr1 ^ _ld)





It works by converting the jmpret <reg>,#_ld nr at _xeq0/1 into a rdlong <reg>,_scr1 in situ and then jumping there (assuming I've got the bits right). For your forward VM, change the sub _pc,#4 to an add.
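The principle behind the _fix constant can be shown with a small sketch. This is an editorial illustration; the bit patterns below are made-up stand-ins, not real Propeller opcodes, since only the XOR identity matters here.

```python
# XOR instruction-morphing: XORing a fetched instruction with a
# precomputed difference mask rewrites it in place into a different
# instruction. The encodings are pretend values; the destination field
# (holding <reg>) is zero in both skeletons, so it passes through the
# XOR untouched and survives into the morphed instruction.

JMPRET_LD = 0b010111_0001_1111 << 18 | 0x123   # pretend jmpret reg,#_ld nr
RDLONG_SCR = 0b000010_0010_1111 << 18 | 0x0AB  # pretend rdlong reg,_scr1

FIX = JMPRET_LD ^ RDLONG_SCR    # computed once at assembly time (_fix)

# At run time a single xor converts one encoding into the other:
assert JMPRET_LD ^ FIX == RDLONG_SCR
assert RDLONG_SCR ^ FIX == JMPRET_LD    # and the mask is its own inverse
```

Because A ^ (A ^ B) == B for any bit patterns, one xor instruction does the whole conversion, which is why the trick is so cheap.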

-Phil

Post Edited (Phil Pilgrim (PhiPi)) : 1/15/2008 7:34:22 PM GMT

hippy
01-16-2008, 10:25 AM
Hmmmm .... Using relative branching with the offset in the unused destination field of the LMM-call
jmp instruction, I'm getting notable deterioration over having the destination address in
the following long. This is using the traditional LMM with 2 x rdlong.

Once again it's LMM code size versus execution speed and a delicate balance of exactly what
LMM code there is.

Phil Pilgrim (PhiPi)
01-16-2008, 11:20 AM
For relative branching, I'm just using add _pc,#offset and sub _pc,#offset: no LMM call necessary. This works for a range of +/-128 instructions. I'm still going with the jump table for longer jumps, which keeps the instruction lengths of branches and jumps the same (i.e. a single long). This is important for me, because it allows the preprocessor to replace jumps with relative branches wherever it can without things moving around in the process. Hopefully, the number of long jumps will be so minimal that the jump table can be kept small.

-Phil

hippy
01-16-2008, 11:50 AM
I forgot about add/sub pc,#offset :)

I went for +/-512 ( separate LMM handlers for forward and backward jumps ), which could be +/-640 if +/-128 uses add/sub pc,#offset.

So many different ways things can be done :) :)

Phil Pilgrim (PhiPi)
01-16-2008, 12:16 PM
Most interesting! I hadn't thought about using a VM-mediated relative branch. The extended range would definitely help to reduce my jump table size!

-Phil

Phil Pilgrim (PhiPi)
01-17-2008, 04:52 AM
Here are a couple benchmarks for RevLMM vs. Spin. In the first one, a pin is toggled using subroutine calls. A call for high, followed by a call for low, ad infinitum:




CON

_clkmode = xtal1 + pll16x
_xinfreq = 5_000_000

VAR

word stack[512]

OBJ

vm : "lmm_vm"

PUB start

stack[ 0 ] := @@0
stack[ 1 ] := 512
stack[ 2 ] := @my_prog
vm.start(@stack)

dira~~
repeat
go_hi
go_lo

PRI go_hi

outa~~

PRI go_lo

outa~

DAT

jmp vm#_ret 'Return from lo.
lo mov outa,#0 'Start of lo: lower pin 1.

jmp vm#_ret 'Return from hi.
hi mov outa,_0x0000_0001 'Start of hi: raise pin 1.

jmpret jmp_tbl_00,vm#_jmp_lo nr 'Loop back to loop.
jmpret jmp_tbl_01,vm#_call_lo nr 'Lower pin.
loop jmpret jmp_tbl_00,vm#_call_hi nr 'Raise pin.

mov dira,#1 'Start of prog: set pin 0 to output.

org $1ed 'Addr of tables in VM cog.
_0x0000_0001 long 1 'Literal table.

jmp_tbl_00 word @loop,@hi 'Jump table.
jmp_tbl_01 word @lo,0

my_prog word 2,1 'Jump table size, Literal table size.




Here's a scope trace of the output: RevLMM in yellow, Spin in blue. LMM is faster by about 8.5:1. This demonstrates the performance hit entailed by too many kernel calls.

http://forums.parallax.com/attachment.php?attachmentid=51478

Things improve when the real work is done by inline code. In this example the pin is toggled in a simple loop: no subroutine calls and a quick relative branch back to the beginning:




CON

_clkmode = xtal1 + pll16x
_xinfreq = 5_000_000

VAR

word stack[512]

OBJ

vm : "lmm_vm"

PUB start

stack[ 0 ] := @@0
stack[ 1 ] := 512
stack[ 2 ] := @my_prog
vm.start(@stack)

dira~~
repeat
outa~~
outa~

DAT

add vm#_pc,#12
mov outa,#0
loop mov outa,#1
mov dira,#1 'Start of prog: set pin 0 to output.

org $1ed 'Addr of tables in VM cog.
_0x0000_0001 long 1 'Literal table.

jmp_tbl_00 word @loop,@hi 'Jump table.
jmp_tbl_01 word @lo,0

my_prog word 2,1 'Jump table size, Literal table size.




Here's the scope trace, showing the oft-cited performance ratio of 29:1:

http://forums.parallax.com/attachment.php?attachmentid=51479

-Phil

Post Edited (Phil Pilgrim (PhiPi)) : 1/16/2008 9:08:48 PM GMT

Jasper_M
01-18-2008, 05:47 AM
WOW! This technique can also be used in GFX drivers, to get scanline buffers etc. into cog RAM. I'll probably use this in my next GFX driver if/when I decide to make one.

Ken Peterson
03-29-2008, 05:11 AM
I just ran across this thread and was intrigued by Phil's thought about using PHSA for the PC. I didn't read the rest of the thread in detail, but I didn't see anyone mentioning the use of PHSA again. If you set FRQA to -4, couldn't you still use PHSA for a PC and reverse-execute the program? It seems the execution loop might be simpler then.

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔


The more I know, the more I know I don't know. Is this what they call Wisdom?

hinv
03-29-2008, 08:32 AM
Hi Phil,

What scope do you use? I really like your trace images, and I am hoping that they were easy to get from the scope.
What is the highest bandwidth you are getting from hub memory to cog RAM?

Thanks,
Doug

Cluso99
06-10-2008, 01:35 AM
Hi All,

I have just been contemplating overlays and came across this thread.

My thoughts were to use short routines that can be loaded as overlays when required but compile as a normal cog object. The idea is you have an initialisation overlay that sets the scene for overlaying (loading routines), followed by separately numbered overlays as other cog objects. Therefore, the Spin compiler should function without modification. No code from the routine(s) is executed while loading the new routine(s).

I have thought about reverse loading but still have not found a method that works within the sweet spot. I would have the overlays loaded at the top of cog RAM ($1E0-$1EF) or maybe lower. To call an overlay would be a simple matter of providing the address and length to be loaded; at the end of loading, a jump to $0 would be executed.

This concept is a simpler version of what was used on minicomputers in the early '70s, so it's not patentable.

I have included an untested sample of the concept. There is no reason that the overlay has to reside at cog address $0, although it would be a better standard to use. A lookup table for the overlays would probably be useful as well!

Any suggestions or comments greatly appreciated.

Post Edited (Cluso99) : 6/9/2008 5:41:20 PM GMT