How do you pass data between multiple objects in multiple COGs?
Charlie Johnson
Posts: 147
Attached is a non-working (no Propeller dev hardware yet) outline of how I think it would be done. Please enlighten me, as I feel I am way out in left field.
Thanks
Charlie Johnson
AKA dy8coke
Comments
Post Edited By Moderator (Chris Savage (Parallax)) : 5/17/2006 1:05:51 AM GMT
There's no issue unless the variable is an array (more than 4 bytes), in which case you may run into the problem of reading data that is in the process of being updated.
-Martin
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Martin Hebel
Southern Illinois University Carbondale - Electronic Systems Technologies
Personal Links with plenty of BASIC Stamp info
StampPlot - Graphical Data Acquisition and Control
Thank you for the reply. I will be getting my Propsticks tomorrow, so I will be able to play. I will be using LockNew, LockSet, LockClr, and LockRet to control access to the GPS data. I guess I am just getting hung up on the last sentence on page 4-166 of the SPIN Language Reference. I will just have to play with it tomorrow.
Thanks
Charlie
Post Edited (dy8coke) : 5/16/2006 8:59:08 PM GMT
-Martin
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
I don't follow your analogy to the uncertainty principle. It seems to me there is a quantum measure of uncertainty with a status byte, while there is none at all with the lock. BTW, this follows up your great example with the ma
I think a single status bit is only useful/valid when one process (the server) is force-feeding another (the client). In such a scenario, the server sets the status bit when it has data ready. As long as this bit is set, it cannot further update the buffer. The client polls the status bit. When it becomes set, it reads the data buffer, then clears the bit, freeing the server to do another update.
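On the Propeller that status bit would be a byte in hub RAM polled by each cog; here is a minimal sketch of the same one-bit handoff in Python, stepped by hand rather than run in free-running cogs (the `Channel` class and its method names are mine, not from any Propeller object):

```python
# Single status-bit handoff: the server may only write the buffer while
# the flag is clear; the client may only read it while the flag is set.
class Channel:
    def __init__(self):
        self.buffer = None
        self.data_ready = False   # the single status bit

    def server_update(self, value):
        if self.data_ready:       # client hasn't consumed the last value
            return False          # server must hold off
        self.buffer = value
        self.data_ready = True    # publish
        return True

    def client_poll(self):
        if not self.data_ready:   # nothing new yet
            return None
        value = self.buffer
        self.data_ready = False   # free the server for the next update
        return value

ch = Channel()
assert ch.server_update(42) is True    # buffer free: write succeeds
assert ch.server_update(43) is False   # bit still set: server blocked
assert ch.client_poll() == 42          # client consumes, clears the bit
assert ch.server_update(43) is True    # server may update again
```

Note that this only stays safe because each side touches the bit in exactly one direction: the server only sets it, the client only clears it.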
For a more interactive approach, a fully-interlocked handshake can be used. This employs two status bits: RFD (ready for data) and DAV (data available). When the client needs data, it waits for DAV to become cleared. Then it sets the RFD bit. The server, seeing this bit set, and once it has data, sets the DAV bit. As long as this bit is set, the server cannot update the buffer. The client, seeing DAV set, then reads the data and clears its RFD bit. The server, seeing that RFD is clear, then clears the DAV bit.
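The four phases of that interlocked handshake can be walked through in a small sketch. This is Python standing in for two cogs sharing hub bytes, with each phase asserted explicitly so the ordering rules are visible (class and method names are mine):

```python
# Fully-interlocked handshake with two status bits, RFD (ready for data)
# and DAV (data available), stepped through its four phases by hand.
class Handshake:
    def __init__(self):
        self.rfd = False    # client: ready for data
        self.dav = False    # server: data available
        self.buffer = None

    # client side
    def request(self):
        assert not self.dav          # wait for DAV to be clear first
        self.rfd = True              # then raise RFD

    def take(self):
        assert self.dav              # data must be available
        value = self.buffer
        self.rfd = False             # done reading: drop RFD
        return value

    # server side
    def supply(self, value):
        assert self.rfd              # only serve a pending request
        self.buffer = value          # buffer is frozen while DAV is set
        self.dav = True

    def finish(self):
        assert not self.rfd          # client has dropped RFD
        self.dav = False             # complete the cycle

hs = Handshake()
hs.request()                         # client: DAV clear, set RFD
hs.supply((47.6, -122.3))            # server: sees RFD, posts data, sets DAV
assert hs.take() == (47.6, -122.3)   # client: sees DAV, reads, clears RFD
hs.finish()                          # server: sees RFD clear, clears DAV
```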
Neither approach, by itself, is adequate when the server is constantly updating the buffer, and the client needs data only occasionally. For this you can use a semaphore or a modified interlocked handshake that uses two buffers: one for current data that gets updated continuously, and one for communication with the client and which is controlled by the handshaking.
With the Propeller, a valid reason to employ status bits is to conserve the limited lock resource. I ran into this with my dynamic memory manager but was able to solve it by using a single lock, which controlled 32 status bits. To access a status bit, you have to own the lock. Therefore these status bits actually form a set of sub-locks, protected by one master lock.
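The sub-lock idea can be sketched like this, with Python's `threading.Lock` standing in for the one hardware lock and a 32-bit word holding the sub-locks (function names echo Spin's lockset/lockclr but are my own):

```python
import threading

# One real lock guarding a 32-bit field of "sub-locks": you must hold
# the master lock to test-and-set or clear any bit.
master = threading.Lock()
sublocks = 0                 # 32 status bits packed into one word

def sub_lockset(n):
    """Try to claim sub-lock n; return True on success."""
    global sublocks
    with master:                       # own the one master lock
        if sublocks & (1 << n):
            return False               # already taken
        sublocks |= (1 << n)
        return True

def sub_lockclr(n):
    """Release sub-lock n."""
    global sublocks
    with master:
        sublocks &= ~(1 << n)

assert sub_lockset(5) is True     # first claim succeeds
assert sub_lockset(5) is False    # second claim is refused
sub_lockclr(5)
assert sub_lockset(5) is True     # free again after release
```

Because the test and the set of each bit happen while the master lock is held, the bits behave exactly like 32 independent locks bought at the price of one.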
-Phil
In the case of the GPS, I'm curious: what method would you favor? Say both the reading and the writing of the buffer are asynchronous. Do you see any reason a lock would _not_ work?
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Tracy Allen
www.emesystems.com
-Phil
Post Edited (Phil Pilgrim (PhiPi)) : 5/17/2006 1:32:16 AM GMT
Thanks for this discussion... Can't wait for the Propsticks to show up.
Charlie
I was more puzzled by the statement, "The reason you don't want to use a lock is that it acts like Heisenberg's Uncertainty Principle: you must affect the value to detect the value. Since you only want the cog connected to the GPS to control the update status's value, the lock won't work." Why wouldn't it work? It seems to me it would work easily, with the caveat Phil mentioned of perhaps needing double buffering.
Also, Charlie just mentioned having multiple consumers of the GPS data. With the semaphore lock, only one cog can hold it at a time, so two cogs can't even read at exactly the same time. Each cog that wants to access the buffer attempts to set the lock. If the previous state was clear, it continues with its routine and does its access, either read or write, and releases the lock when finished. If the previous state was set, it skips the access routine and tries again later. Only one cog is in control at a time. That could slow down response, because there is nothing otherwise wrong with simultaneous reading. Maybe there is an easy way around that, but whatever it is will certainly take more code, whereas the lock by itself is one instruction. The timing constraints would have to be awfully tight for that to make a difference.
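That try-set-or-skip pattern maps onto Spin's lockset, which returns the lock's previous state. A rough Python analog, with `Lock.acquire(blocking=False)` standing in for lockset and `release` for lockclr (the buffer and function names are illustrative only):

```python
import threading

gps_lock = threading.Lock()      # stands in for a Propeller lock ID
gps_buffer = []                  # stands in for the shared hub buffer

def try_access(new_fix=None):
    """Attempt one access; skip (return None) if the lock is held."""
    if not gps_lock.acquire(blocking=False):   # lockset returned "was set"
        return None                            # skip, try again later
    try:
        if new_fix is not None:
            gps_buffer[:] = new_fix            # writer path
        return list(gps_buffer)                # reader path: whole fix
    finally:
        gps_lock.release()                     # lockclr

assert try_access([47.6, -122.3, 90.0]) == [47.6, -122.3, 90.0]
assert try_access() == [47.6, -122.3, 90.0]    # reader gets a complete fix

gps_lock.acquire()               # simulate another cog holding the lock
assert try_access() is None      # this cog skips and would retry later
gps_lock.release()
```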
I see how the _interlocked_ handshake status bits Phil suggested would work for the asynchronous process, but it gets more complicated when there are multiple receivers or senders. On the other hand, it seems to me, the semaphore lock is still "drop dead simple", even when there are multiple senders/receivers, provided timing constraints are not too tight.
The way I see the simple status byte fail is,
"Cog0 fetches the status byte, sees it's not being updated, and proceeds to read the data on the next rotation."
appending this phrase,
", but during the time of that one rotation, Cog1 starts the update, setting the status to 'update in progress'".
So Cog0 and Cog1 at that instant think they are both in control. It is vastly more problematic for Spin, where there will be hundreds of hub rotations between reading the status and the next action.
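That failure can be replayed as explicit steps; this is a toy sequential trace, not real cog code, with a string standing in for the status byte:

```python
# The race Tracy describes: both cogs sample the status byte
# before either one changes it.
status = "idle"

# hub rotation 1: Cog0 fetches the status byte
cog0_sees = status            # "idle" - Cog0 decides it may read

# hub rotation 2: before Cog0 acts, Cog1 starts its update
cog1_sees = status            # also "idle" - Cog1 decides it may write
status = "update in progress"

# at this instant both believe they are in control of the buffer
assert cog0_sees == "idle" and cog1_sees == "idle"
```

The window between reading the byte and acting on it is the whole problem; lockset closes it by making the test and the set one indivisible operation.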
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Tracy Allen
www.emesystems.com
I admit that Spin complicates the issue, but I am willing to bet that the execution time for the copy operation is symmetric regardless of direction. Whether the writer is copying data into the GPS buffer or a reader is copying data out of it, the execution time would be the same. So I am nearly convinced at this point that no control mechanism is needed if the data is accessed in one swift motion.
For the writer:
For the reader:
This should work for any arbitrary length of data: the amount of time each assignment takes will be the same, so the possibility of an unsynchronized transfer of data doesn't exist; any interleaving of accesses between writer and reader will happen in an organized, non-corruptive manner. Determinism dictates this, and while Spin isn't fully deterministic, it is when you're doing an apples-to-apples comparison.
Implementing handshaking, a status byte, semaphores, or any other control mechanism just creates an unnecessary bottleneck in an attempt to cover a situation that wouldn't occur if you use an agreed-upon methodology of accessing the data.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
Post Edited (Paul Baker) : 5/17/2006 5:48:37 PM GMT
Okay, I think I see what you're getting at: once a reader starts reading data, it will stay as many steps ahead of (or behind) the writer as when it started, so it can't get pieces of two different datasets. I'll buy it insofar as the reader and writer were able to access memory without the commutator intervening. But are you sure the commutator doesn't throw in a wrinkle of uncertainty that might screw up the process?
BTW, one "handshaking" method I forgot to include above is the one often employed with clock chips: read the data multiple times until you get two full datasets in a row which are identical.
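That clock-chip trick is easy to sketch. Here `next_snapshot` stands in for one full read of the shared buffer; in this sketch it is fed a canned sequence whose first reads are torn (all names are mine, for illustration):

```python
# "Read until two consecutive snapshots match" - no lock, no flag,
# just re-reading until the dataset is self-consistent.
def stable_read(next_snapshot, max_tries=10):
    prev = next_snapshot()
    for _ in range(max_tries):
        cur = next_snapshot()
        if cur == prev:          # two full, identical datasets in a row
            return cur
        prev = cur               # torn read: try again
    raise RuntimeError("buffer never settled")

feed = iter([(1, 2), (1, 3), (4, 3), (4, 3)])   # first reads are torn
assert stable_read(lambda: next(feed)) == (4, 3)
```

The cost is extra reads on the client side; the benefit is that the writer needs no cooperation at all, which suits a server that updates the buffer continuously.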
-Phil
D. it should have LatA, LonA, HdgA at TimeA.
Charlie
So if "A = B" is executed in Spin, let's say the first hub access is the "=" (operator), the second hub access is the address of B (source), the 3rd hub access is copying the value at B into local memory, the 4th hub access is the address of A (destination), and the 5th hub access is copying the local value of B into global location A. So even though it takes 5 hub rotations to perform the operation, it should take 5 hub rotations each and every time a pure assignment operation occurs (physical global address to physical global address), whether the writer is doing it or the reader is doing it.
Now if there is a hub operation missed due to some computation done (which likely would occur during decoding or address computation) this should occur in the same span. Perhaps a picture would help:
So given this hypothetical sequence for Spin to do the assignment operation, there are 9 possible relationships between the reader and writer. Read examples 1-3 will get the old data, read examples 5-9 will get the new data, and read sequence 4 will get the new or old data depending on the relationship between the writing cog and the reading cog.
As you can see, there is never a situation where the writer "overtakes" a reader, or a reader "overtakes" the writer. They all behave in harmony irrespective of where they are in the sequence, or even how many hub delays occur (as long as the number of hub delays is consistent between the reader and writer).
Charlie,
What I'm trying to get at is that it doesn't matter how many longs there are; as long as the longs are written to or read from with no intervening Spin commands (= after = after = for however many longs there are), you could be doing this over thousands of longs if necessary. In fact, the more longs there are, the more you want to stay away from semaphores, because it takes so long to access the entire datagram that the writer could actually get starved out of updating it for much longer than it should. Starvation occurs when a high-priority process (your cog doing the GPS update) keeps getting bumped by low-priority processes (the cogs reading the GPS data) commandeering the datagram. Let's say each long access takes 9 hub rotations as in the above example, you have 16 longs of data, and you have 1 writer and 4 readers. Say the writer gets first access and fills up the data; that takes 144 hub rotations. Then reader 1 reads it, taking another 144 hub rotations; right after that, reader 2 gets access for another 144 rotations, then reader 3, then reader 4. This will occur because the readers will be higher up in the waiting queue (due to the predictable rotation of the commutator), so the writer must wait 576 hub rotations before it can update the data again. That is a LOT of time wasted by all cogs involved waiting for no reason, when everyone could access the data whenever they like if they all just adhere to a simple rule for accessing the datagram.
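The worst-case arithmetic in that paragraph works out as follows (just Paul's own numbers, made explicit):

```python
# Paul's starvation scenario: 16 longs at 9 hub rotations per long,
# one writer queued behind four readers.
rotations_per_long = 9
longs = 16
readers = 4

one_pass = rotations_per_long * longs   # rotations for one full copy
writer_wait = readers * one_pass        # readers all go before the writer

assert one_pass == 144                  # one writer or reader pass
assert writer_wait == 576               # rotations before the next update
```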
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
Writer:
GPS0 = WT0
GPS1 = WT1
...
GPSN = WTN
Reader:
RT[0] = GPS0
RT[1] = GPS1
...
RT[N] = GPSN
This is because there is a hidden addition operation in array accesses, where the array index is added to the array base address, and this could cause the writer to overtake the reader.
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
-Phil
Update: As an addendum, I'd also recommend adding copious comments to the program about what's been done. This is the kind of thing which, a couple of months down the road, a person might forget, and then make innocent-looking but not-so-innocent-acting changes. Also, it's the kind of thing that might break with updates to the compiler.
Post Edited (Phil Pilgrim (PhiPi)) : 5/17/2006 8:58:20 PM GMT
Even more reason for me to get my behind in gear and get to the disassembler project, so we can really look at the Spin interpreter code (in a more readable form).
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10
Post Edited (Paul Baker) : 5/17/2006 8:59:39 PM GMT
Yet another facet I didn't take into consideration (compiler updates). So perhaps a double-buffer situation is best. That way the readers could all access a valid data set while the writer updates the other buffer.
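A minimal sketch of that double buffer, in Python standing in for two hub-RAM buffers and an index byte (names are illustrative; on the Propeller the flip would be a single byte write, which is atomic at the hub):

```python
# Double buffering: the writer fills the idle buffer, then flips an
# index; readers always copy from the buffer the index points at.
buffers = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
active = 0                      # which buffer readers should use

def write_fix(lat, lon, hdg):
    global active
    spare = 1 - active
    buffers[spare][:] = [lat, lon, hdg]   # update out of readers' view
    active = spare                        # one atomic flip publishes it

def read_fix():
    return list(buffers[active])          # always a complete dataset

write_fix(47.6, -122.3, 90.0)
assert read_fix() == [47.6, -122.3, 90.0]
write_fix(47.7, -122.4, 91.0)             # readers never see a half-update
assert read_fix() == [47.7, -122.4, 91.0]
```

One remaining caveat: a reader that is still mid-copy when the writer flips twice could see a tear, so the writer's update rate has to be slow relative to a read, which holds easily for GPS data arriving at a few hertz.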
▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
1+1=10