Great advice Dave - I'll adjust the code. Re the comment blocks: I'm doing something very similar in a Catalina IDE, where it changes the color of the text, so I should be able to reuse that. It is just a flag that is set true or false as the comment blocks are encountered.
Here we go.
* upper or lower case
* ignores anything in a comment block
* adds the CON section
This is my test program - I added a dummy CON section
''********************************************
''* Full-Duplex Serial Driver v1.2 *
''* Author: Chip Gracey, Jeff Martin *
''* Copyright (c) 2006-2009 Parallax, Inc. *
''* See end of file for terms of use. *
''********************************************
{-----------------REVISION HISTORY-----------------
v1.2 - 5/7/2009 fixed bug in dec method causing largest negative value (-2,147,483,648) to be output as -0.
v1.1 - 3/1/2006 first official release.
}
CON
  testvariable = 1
VAR
  long cog                      'cog flag/id
  long rx_head                  '9 contiguous longs
  long rx_tail
and this is the modified program. Is this the right format for stubs?
' (needs Imports System.IO for FileStream/BinaryReader)
Dim LineofText As String
Dim Textarray(30000) As String
Dim FileLength As Integer
Dim FileCounter As Integer
Dim i As Long
Dim FileNamePath As String
Dim BinaryFileLength As Long
Dim BinaryFileCounter As Long
Dim b As Byte
Dim Commentflag As Boolean
Dim Constantflag As Boolean
Dim LeftThreeChar As String

FileNamePath = TextBox20.Text                               ' e.g. c:\testfile.spin
Dim FileRead As New FileStream(FileNamePath, FileMode.Open, FileAccess.Read)
Dim br As New BinaryReader(FileRead)                        ' binary reader
' spin files are in an unusual format - some characters >127 and many zeros
Commentflag = False                                         ' set by { or {{ and reset by } or }}
Constantflag = False                                        ' set by CON, reset by PUB, PRI, VAR or DAT
LineofText = ""
BinaryFileLength = br.BaseStream.Length() - 1               ' get binary file length
For i = 0 To BinaryFileLength
    b = br.ReadByte()                                       ' read the byte
    If b >= 32 And b <= 127 Then                            ' keep only printable characters
        LineofText += Strings.Chr(b)                        ' add to line
    End If
    If b = 13 Then                                          ' carriage return ends the line
        Textarray(FileCounter) = LineofText                 ' store the line
        FileCounter += 1                                    ' add one to counter
        LineofText = ""                                     ' clear the line
    End If
Next
FileRead.Close()                                            ' close the input file
FileLength = FileCounter - 1                                ' get the file length

FileOpen(1, TextBox21.Text, OpenMode.Output)                ' open the output file
For i = 0 To FileLength
    LineofText = Textarray(i)                               ' get the line
    LeftThreeChar = Strings.UCase(Strings.Left(LineofText, 3))  ' left three characters, upper case
    If Strings.Left(LineofText, 1) = "{" Or Strings.Left(LineofText, 2) = "{{" Then
        Commentflag = True                                  ' now inside a comment block
    End If
    If Strings.Left(LineofText, 1) = "}" Or Strings.Left(LineofText, 2) = "}}" Then
        Commentflag = False                                 ' comment block finished
    End If
    If LeftThreeChar = "CON" Then
        Constantflag = True                                 ' start of the CON section
    End If
    If LeftThreeChar = "PUB" Or LeftThreeChar = "PRI" Or LeftThreeChar = "VAR" Or LeftThreeChar = "DAT" Then
        Constantflag = False                                ' CON section ends at the next block
    End If
    If Commentflag = False Then                             ' not in a comment block
        If LeftThreeChar = "PUB" Or LeftThreeChar = "PRI" Or Constantflag = True Then
            PrintLine(1, LineofText)                        ' keep PUB/PRI lines and the CON section
        End If
    End If
Next i
FileClose(1)
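For anyone who would rather not run VB, a rough C sketch of the same filtering logic might look like this. It is an untested illustration only, not the posted program; the command-line file names and the treatment of line endings are my assumptions.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(int argc, char **argv)
{
    FILE *in, *out;
    char line[512];
    int len = 0, c;
    int comment = 0, constant = 0;

    if (argc < 3) {
        fprintf(stderr, "usage: stubs infile.spin outfile.spin\n");
        return 1;
    }
    in  = fopen(argv[1], "rb");
    out = fopen(argv[2], "w");
    if (in == NULL || out == NULL) {
        perror("fopen");
        return 1;
    }
    while ((c = fgetc(in)) != EOF) {
        if (c >= 32 && c <= 127 && len < (int)sizeof(line) - 1)
            line[len++] = (char)c;              /* keep printable characters    */
        if (c == 13) {                          /* carriage return ends a line  */
            char head[4] = {0, 0, 0, 0};
            int i;
            line[len] = '\0';
            for (i = 0; i < 3 && line[i]; i++)  /* first three chars, upper case */
                head[i] = (char)toupper((unsigned char)line[i]);
            if (line[0] == '{') comment = 1;    /* entering a comment block     */
            if (line[0] == '}') comment = 0;    /* comment block finished       */
            if (strcmp(head, "CON") == 0) constant = 1;
            if (strcmp(head, "PUB") == 0 || strcmp(head, "PRI") == 0 ||
                strcmp(head, "VAR") == 0 || strcmp(head, "DAT") == 0)
                constant = 0;                   /* CON section ends at next block */
            if (!comment && (constant ||
                strcmp(head, "PUB") == 0 || strcmp(head, "PRI") == 0))
                fprintf(out, "%s\n", line);     /* keep PUB/PRI lines and CON section */
            len = 0;
        }
    }
    fclose(in);
    fclose(out);
    return 0;
}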
I'm about ready to provide a "Little Big SPIN" for testing tonight or tomorrow. The early edition will allow for testing files against the SPIN standard without external memory. A BIGSPIN flag will allow testing with one of the JCACHED memory interfaces.
That is, C3 & SDRAM users can begin testing 64KB Spin programs. If David Betz can port the DracBlade to the JCACHED interface, testing can begin there too. I could do a special DracBlade version at some point that does not use cache, just for performance comparisons, to answer Ross' burning question.
If David Betz can port the DracBlade to the JCACHED interface, testing can begin there too.
It's done! I just got ZOG running on Dracblade tonight. Well, I need to update the pin definitions for the TV, keyboard, and SD card, but that shouldn't be too difficult as long as the SD card SPI interface just uses a pin for CS and not more complex logic like the C3.
Do you have some code? Was it easy to port across?
[addit - the CS is a dedicated propeller pin]
I finally got the changes working with linkit. It will now replace any object in the parent with an object from a child binary. The toughest part was handling the VAR offset, but I think I got it right. The attached zip file contains the new linkit source and DOS executable. linkit now accepts an object number, or it will default to replacing the last object in the parent. I created a version of the SpinSim demo that uses stubs for conio and fileio. The batch file runit1.bat will link the conio and fileio binaries to the demo program. It runs the following commands:
linkit demo.binary -1 conio.binary out1.binary
linkit out1.binary -2 fileio.binary out2.binary
conio.binary replaces object number 1 and fileio.binary replaces object number 2.
This version of linkit supports images up to 64K in size, and it uses the standard 16-byte Spin binary file header.
BTW, a -p option at the end of the command line will print debug information, but it is probably mostly useless except to me. Now the next step is to implement the 32-bit version for really large memory images.
Steve, I'm looking forward to seeing your interpreter once it's done. I'm planning on working on some of my other projects for a while -- maybe I'll actually finish one.
Dr_Acula, good work on your program that extracts the object stub information. It looks good.
I went back and looked at the LMM version of the Spin interpreter. The last time I worked on it, it was running about 50% slower than the standard interpreter. It now runs about 33% faster than the standard interpreter, which is about what the 2-Cog version does. Of course the performance was measured running the Dhrystone benchmark program, which is what I used to tune it. The performance will be a bit less on other programs.
Dave
@Dave, a 33% improvement is good. At some point LMM will need to access the cache routines. Having an LMM primitive, rather than interpreted LMM code, perform those accesses would be important.
@Dr_A, please provide a C translation of your program when you can, for Linux users who don't really want to run VB.
I keep finding reasons to have BigSpin code live at some address > $FFFF so that the BigSpin data space transfers to/from HUB space is transparent to users. At this point I have to hack in @var+$1000_0000 or something to define a HUB address for the interpreter.
@Dave, how hard would it be for a 32 bit linkit to add $10000 (or some other bigger value) to all PC, Stack, and Data references? It may be more complicated ... hard to tell just now.
The Spin interpreter is using about 390 longs of cog memory, so there is quite a bit of space available for the memory access routines. I'm using a 256-byte jump table in hub RAM, which allows me to jump to optimized instruction routines in the first 256 longs of cog memory. The second half of the cog memory contains the LMM interpreter and helper routines.
Steve, I'm confused about your request to have linkit add the $10000 offset. This would normally be done when you relocate the binary from address 0 to some other address. With the standard 16-byte header we would just add the starting address to PBASE, VBASE, DBASE, PCURR and DCURR in the header. These are located at words 3 through 7. Of course, the new values won't fit in the 16-bit locations if we add $10000, so maybe the initial loop in the interpreter can add the offset instead. Is that what you're currently doing?
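For reference, the 16-byte header being discussed could be described with a struct like the sketch below. It is an illustration based on the field positions mentioned above (words 3 through 7); the remaining fields and the field names are my assumptions, so check the actual tools before relying on the exact layout.

#include <stdint.h>

/* Sketch of the standard 16-byte Spin binary header. Words 3 through 7
   (byte offsets 6..14) hold the five pointers mentioned above; the first
   six bytes hold the clock settings and checksum. Illustration only. */
#pragma pack(push, 1)
typedef struct {
    uint32_t clkfreq;       /* bytes 0..3: clock frequency */
    uint8_t  clkmode;       /* byte 4: clock mode */
    uint8_t  checksum;      /* byte 5: checksum */
    uint16_t pbase;         /* word 3: start of the object image */
    uint16_t vbase;         /* word 4: start of variable (VAR) space */
    uint16_t dbase;         /* word 5: start of stack space */
    uint16_t pcurr;         /* word 6: initial program counter */
    uint16_t dcurr;         /* word 7: initial stack pointer */
} SpinHeader;
#pragma pack(pop)

/* Relocating the image would mean adding the new base to each of the five
   pointers, but as noted above a base of $10000 or more no longer fits in
   these 16-bit fields, so the offset has to be applied by the interpreter
   at startup instead. */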
I'm currently just adding $10000000 to any HUB reference that is required for Spin/PASM interaction.
The problem with that of course is having to add that to every variable reference.
I guess I'll be forced to move to 32-bit Spin sooner than I would like in order to add the code base address.
Knowing the 16-bit version works first will make the 32-bit transition easier.
The cache code will fit comfortably in your LMM when you're ready to go with that. It will be a while before my interpreter is optimized. There may be some designs that will benefit from embedding all code in a COG, but everything will have to be read/modify/write on the outputs. Having the hardware access in a separate cog makes many things easier, including allowing big drivers.
You can still use the standard 16-bit header in the binary file and add the offset at the beginning of the interpreter when it copies the 5 pointers from the header. The code would look like this:
                org     0
                mov     x,#$1F0-pbase           'entry, load initial parameters
                mov     y,par
:loop           add     y,#2
:par            rdword  pbase,y
:par1           add     pbase,addr_offset
                add     :par,incr_dest          'inc d lsb
                add     :par1,incr_dest         'inc d lsb
                djnz    x,#:loop
                cogid   id                      'set id
                jmp     #loop

addr_offset     long    $10000000
incr_dest       long    $200
'
'
' Main loop
'
loop            mov     x,#0                    'reset x
This does require 4 extra longs. Of course, once you do that you will need to increase the four values in the stack frame by writing them as longs instead of words.
'
'
' drop anchor
'
j0              or      op,pbase                'add pbase into flags
                wrlong  op,dcurr                'push return pbase (and flags)
                add     dcurr,#4                'stack entries are now longs, so step by 4
                wrlong  vbase,dcurr             'push return vbase
                add     dcurr,#4
                wrlong  dbase,dcurr             'push return dbase
                add     dcurr,#4
                wrlong  dcall,dcurr             'push dcall (later used for pcurr)
                mov     dcall,dcurr             'set new dcall
                add     dcurr,#4
                jmp     #push                   'init 'result' to 0
You will need to make some changes in the code that performs the calls and the returns. I think that's all you'll need to change. Watch out for a trick used in the calling code when calling a method in another object. It reads the new object's PBASE and VBASE with a single rdlong instead of two rdword instructions. The new PBASE will end up with the VBASE bits in the 16 most significant bits. You will want to use two rdword instructions or mask off the upper 16 bits before adding it to PBASE.
I created a 32-bit version of the spin interpreter. It works with the standard 16-bit binary files, but it can run Spin code that is located above $10000. Of course, I needed a way to test it out so I modified SpinSim to support more than 64K of memory. It now has a -m option that specifies the size of the RAM in kilobytes. You can download the latest version of SpinSim at http://forums.parallax.com/showthread.php?127976-Spin-Simulator .
The source for the 32-bit interpreter is interp32.spin in the zip file. The file diff.txt shows the differences between this and the original interpreter source. The changes are mostly in the startup code at the beginning and the code that builds the stack frame (i.e., "drop anchor"). The pointers were changed from 16-bit to 32-bit values. There were also a few places where PBASE was ANDed with $FFFF or $FFFC, and another place where the upper 16 bits contained VBASE.
I changed the startup code so that it reads the starting address location from a hub RAM variable pointed to by PAR. PAR is initialized with a 14-bit long address, so it cannot contain a 32-bit address. The startup code adds the 32-bit starting address to the 16-bit values it reads from the header.
I tested interp32 by modifying the BigSpin demo program so the "run" commands execute programs at location $20000. The modified version is called demo32. It can be run as follows:
spinsim demo32.binary -m256
This will run it in a 256K hub RAM. While running demo32 type "run hello.binary" and it will run the hello program at $20000.
I created a 32-bit version of the spin interpreter.
I'm almost there with overlay and external memory hooks.
My changes are consistent with your diffs.
Made a lot of progress this morning so far.
I'll have to stop in an hour though to go look at paint
I lost my primary workstation today and this morning's work is gone. I do have a backup from a few days ago. It will take a while to recover. Now I'm very sorry I have not posted anything here.
Is there any chance you could remove the hard drive from the dead machine and install it in a working one to recover the data? Another thing I've done in a similar situation is removing the hard drive from the dead computer and installing it in an external USB hard drive enclosure. Or was it the hard drive itself that died?
Anyway, sorry to hear this! It's quite frustrating to lose work and it sounds like you were making good progress.
I do have a USB enclosure; I might try that. The laptop seems fine other than the hard drive.
I'm back to where I was this morning + some :sick:
I believe coginit is working - I had to write another XMEM -> HUB block load routine.
Can't print anything just yet. Maybe tomorrow.
Consider Dropbox.
It helps against disk failures and solves PC sync issues with no effort. If you want to try it, ask any user in the forum for a referral code. You both get a little extra space...
Massimo
Excellent suggestion! I use Dropbox for all of my microcontroller work. The files tend to be pretty small, so it doesn't take long to sync them, and I don't have to worry about losing anything because not only are my files on both my PC and dropbox.com, they are also on every other PC I have in my house, since I sync multiple machines to the same Dropbox account. So, if dropbox.com goes away, I still have backups on multiple machines of my own. I have yet to exhaust their free 2 GB account.
The great and cruel irony is that I was using my offline storage to save files when my hard drive disappeared. Danged if you do, danged if you don't.
Right now I'm running a program that expects communications on address $34, and the cache mailbox just happens to also live at address $34. Not good. The reason this is happening is that I'm loading the program from SD card to external memory and the comm addresses just happen to be the same.
It doesn't matter what "virtual address" the code is running from; it still needs access to the physical hub address for cog-to-cog communications. A cog could interact with the external memory, but that's a lot of burden for a full, busy cog. So, one of the BigSpin issues is separating the variable space for program <-> cog communications.
We solved this two different ways with Zog. One was allocated, the other was linker mapped. Linker mapped won in the end because it was more efficient. Most likely COM data structures for stuff like FullDuplexBlah will get some special addresses. This is unfortunate, but I don't see an easy way out. Maybe my sight is dim today.
I don't know if this is a good idea or if it's the way they intend Dropbox to be used, but I keep my working directory on Dropbox. Every edit I make gets immediately backed up. I don't run an occasional backup. I have continuous backups this way. However, it does mean that object files and executables get backed up as well, which is probably wasteful of both space on Dropbox and network bandwidth.
Guys with disappearing code problems, Dave for example,
Every edit I make gets immediately backed up. I don't run an occasional backup. I have continuous backups this way.
This is better than nothing, I guess, but you still need backups. What happens when something weird happens and your file gets saved as garbage or zero length? Poof, it's faithfully copied to Dropbox and all your work is gone.
Better to use a proper version control system; then you have backups and a version history that you can unwind when you realize what you are doing is gibberish or a file gets corrupted.
It occurs to me that, as we are working on open source projects, we would be better off using proper version control, both locally and on an external server, for example the place where the ZPU tool chain is kept, http://repo.or.cz. As far as I know it's free and just works. Committing umpteen versions as you work to a git branch is dead easy. I've been meaning to put Zog up there for a long time; in fact the Zylin guys suggested that at some point Zog and the ZPU in C could be merged into the ZPU tree.
Once, years ago, I formatted a USB memory stick as an EXT3 file system and tried to read it in 1) the workstation in my office, 2) my laptop, a backup device, and 3) my home PC, another backup device. How stupid do you think I felt when I discovered that doing so had triggered the same bug in Linux on all three machines, corrupting the hard drives in all three of them?!
Luckily my not-so-old code was also on a version control server. Nowadays I use the forums as a backup for my personal projects :)
You're right that we should be using some sort of version control system. I've used CVS in the past and various systems that my companies have required but I haven't tried GIT yet. I guess I should look into it. Anyway, this discussion seems to have gotten off track from the original topic and I'm afraid I probably led it that way. Sorry!
Attached are 2 files:
- LittleBigSpin.zip - the Little BigSpin Interpreter
- hello_lbs.zip - a Little BigSpin hello world demo application
Both archives are compilable with Propeller Tool, Homespun, or BST/C.
The Little BigSpin Interpreter will load and run the hello demo from SDCARD to external memory. The only external memory supported at this time is the SdramCache.spin hardware. The cache interface is based on the JCACHED_MEMORY model, so C3 or DracBlade cache drivers by David Betz could be used with the interpreter. At some point the simulator can also use the interface.
Some instructions for this demo are enumerated below.
A file called userdefs.spin is included with hello_lbs.zip. It should be modified for your hardware before building hello.bin. The default is the SDRAM module pin configuration.
The hello demo must be compiled to a binary and saved to SDCARD as "hello.bin".
After saving the hello.bin file, compile and run (F10) the lbs.spin file.
The program will start in a moment. You can check diagnostic messages on your serial port.
The result will be "Hello World" scrolling on the TV with the application run from external memory.
Please note that a special BigTvText.spin is used as the TV interface.
BigTvText.spin uses BigHelper.spin to place key variables in HUB from external memory.
It should be noted that the hello demo code is 100% SPIN/PASM.
The lbs.spin code is not very pretty and is somewhat inefficient right now. Some comments at the top of lbs.spin are not relevant. Pardon my mess.
The lbs.spin loader/interpreter could be programmed to EEPROM (F11) for stand-alone operation.
Some optimizations are possible with this. The puzzle of how to map HUB space still needs work.
I haven't used linkit yet, so I guess these files just demonstrate proof of concept for executing spin bytecodes from external memory. I have to take a break for a few days and will look at using linkit and integrating a simulated external cache memory later.
It looks like you've made some good progress. I may have to buy a C3 so I can start doing some external memory stuff. I could also add external memory emulation to SpinSim, though it could be simulated with Spin/PASM code.
The simulator does treat the hub address a bit differently when the memory size is greater than 64K. At 64K (or 32K, which is really the same thing) it masks off the upper 16 bits of the address. The only special case where it isn't masked off is the system I/O address of $1234000X, which is used for conio and fileio. When the hub RAM size is greater than 64K I no longer mask off the upper address bits. The normal Spin interpreter won't work correctly under this model if there are any VAR variables defined and a method in an external object is called. This is because the standard interpreter adds the VAR offset to the upper 16 bits of the new PBASE. It's a programming trick to save one long instruction.
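In other words, the masking rule might be sketched like this (hypothetical code with made-up names, not SpinSim's actual source):

#include <stdint.h>

/* Hypothetical sketch of the hub-address rule described above. */
uint32_t map_hub_address(uint32_t addr, uint32_t hub_size)
{
    if ((addr & 0xFFFFFFF0) == 0x12340000)      /* system I/O window $1234000X */
        return addr;                            /* never masked */
    if (hub_size <= 0x10000)                    /* 64K (or 32K) hub RAM */
        return addr & 0xFFFF;                   /* mask off the upper 16 bits */
    return addr;                                /* larger hub RAM: use the full address */
}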
The INA, OUTA and DIRA registers really don't do anything in the simulator currently, but I could add some simulated memory support. I suspect I'll need to add other simulated I/O widgets in the future.
The C3 has 64K SPI RAM, so what I've posted could be used with that when C3 is ported. The more interesting C3 external memory is flash at 1MB. The interpreter I've provided should not be used with Flash though because all application stack, data, and code live in the external memory; Flash would probably wear out quickly.
The problem with C3 or any serial RAM is speed.
It's probably better to use one of the existing parallel data bus external memory solutions with BigSpin as it is today - Nick should be releasing a new SDRAM module this week for the PropellerPlatform. Getting a DracBlade port running should be a high priority. I don't have hardware, but others do.
I'm not sure what it will take to get spinsim working with the cache, but I'll spend a little time on that soon since it is the most common platform. I could peek at it a little this morning depending on my grandson's schedule.
Regarding the address spaces. I could have made the code space base address $1000000, but that turned out to be 2 wasted longs. The interpreter has to decode a separate address for HUB and it was just as easy to make the HUB base address $10000000. There is some difficulty with HUB space right now regarding address map as I've noted in the comments - a better solution is necessary. Unfortunately existing HUB device interface objects will need to use wrappers from the BigHelper.spin file for application/HUB communications.
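As a rough illustration of that split, a memory-access routine might decode addresses something like the sketch below. The function names are made up and the real interpreter does this in PASM, so treat it as a model only.

#include <stdint.h>

#define HUB_BASE 0x10000000                     /* HUB addresses are tagged with this bit */

/* Hypothetical helpers: hub_read_long would read physical hub RAM,
   cache_read_long would go through the JCACHED external memory driver. */
extern uint32_t hub_read_long(uint32_t addr);
extern uint32_t cache_read_long(uint32_t addr);

uint32_t read_long(uint32_t addr)
{
    if (addr & HUB_BASE)
        return hub_read_long(addr & 0xFFFF);    /* physical hub access */
    return cache_read_long(addr);               /* external memory via the cache */
}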
Getting some kind of a RAM loader working for real hardware is a high priority to allow easier application development - xyzmodem is not a usable option imho. David Betz created a loader for ZOG that can be ported for use with BigSpin. It offers a boot-loader and runner for SDcard and RAM for the SDRAM module and DracBlade. It also supports C3 Flash for the day when some BigSpin version can use flash. Catalina's payload program could also be used for a boot loader eventually when Catalina supports JCACHED_MEMORY - maybe the nasty dependency mess is resolved now that homespun has a #include directive.