A proposal to develop a standard for communicating with cogs from any language! - Page 9 — Parallax Forums

A proposal to develop a standard for communicating with cogs from any language!


Comments

  • 4x5n4x5n Posts: 745
    edited 2011-12-09 07:35
    Heater. wrote: »
    So we are all agreed then.

    Standards are wonderful things. We should all have one:)

    There are enough around that we can all pick our 3-4 favorite!! :smile:
  • Dave HeinDave Hein Posts: 6,347
    edited 2011-12-09 07:57
    Ariba wrote: »
    If you load the cog code as a binary blob, you can also just poke into this blob with hardcoded offsets to change the binary blob, before you start the cog. You can get the offsets from a BST listing, or just with a little Spin method in the original object.
    In a previous post, I mentioned a technique that will work with all languages. The variables that need to be poked can be grouped together starting at cog location 1. Cog location 0 will contain a jmp instruction that skips over the block of variables. Location 0 can be used as a scratch register after the jmp is executed. The Spin, C, PropBasic or other high level languages just need to have an array or struct containing the values that are copied into the variable space of the cog image before starting the cog.
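
    A minimal C sketch of this technique - assuming a toolchain that exposes the PASM blob as a byte array in hub RAM and provides a cognew()-style call, as the Propeller C toolchains do - might look like the following. The image name, struct and field names are purely illustrative:

        #include <string.h>
        #include <stdint.h>
        #include <propeller.h>              /* assumed: provides cognew()          */

        extern uint8_t driver_image[];      /* hypothetical PASM blob; a jmp sits
                                               at cog location 0                    */

        typedef struct {                    /* mirrors cog locations 1..3           */
            uint32_t pin_mask;
            uint32_t bit_period;
            uint32_t mailbox_addr;
        } driver_params_t;

        int start_driver(const driver_params_t *p, void *par)
        {
            /* The jmp occupies cog location 0, so the variable block begins at
               byte offset 4 in the hub copy of the image.                          */
            memcpy(driver_image + 4, p, sizeof(*p));
            return cognew(driver_image, par);        /* start the patched image     */
        }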
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-12-09 09:06
    4x5n wrote:
    heater wrote:
    So we are all agreed then.

    Standards are wonderful things. We should all have one:)
    There are enough around that we can all pick our 3-4 favorite!!
    I know these were meant tongue-in-cheek, but they dovetail nicely with what I think the role of a standards-keeper should be: namely that of a taxonomist. In this role, the ways programmers already do things get categorized and cataloged to make them accessible to other programmers. (If a standards-keeper really wants to go out on a limb, he can even give the various methods names; but even that would be considered presumptuous, requiring the formation of a naming committee. :) ) From there, the various advantages and disadvantages of each method for a given kind of problem can be debated. But I don't see a future for any "thou shalts" in this process. I guess my point is that it's fine to want to define a standard. But do it by example in your own programming. If it's something worthwhile, the community will rally around it without having to push it on them.

    I'm sorry if my comments seem antagonistic. But I very strongly consider standards-setting, as opposed to standards-classification, to be a tool that ultimately stifles creativity, rather than enhancing it. As a consequence, I think the onus should be on the compiler writers to be as accommodating as possible to the freedom of the programmers who use their tools, rather than imposing straight-jackets on them. We Perl programmers have a mantra: TIMTOWTDI. It stands for, "There is more than one way to do it," and is a celebration of the diverse options that we enjoy. The success of any ecosystem, whether natural or artificial, depends on diversity. Acknowledging and fostering diversity has worked amazingly well for Perl. It could continue to work here too -- if you let it.

    -Phil
  • kwinnkwinn Posts: 8,697
    edited 2011-12-09 17:16
    I must confess to having mixed feelings on this topic. On the one hand I agree with Phil up to a point. I would not like to see rigid standards that stifle creativity. On the other hand I am generally in favor of standards and find them useful as guidelines or a framework for writing an application.

    Perhaps each person who has a standard to propose could start a thread of their own with a more detailed description and some examples of the standard in use so the forum can look them over and possibly make some suggestions for improvements.

    I would certainly be delighted to find a useful standard. Even having several standards would be better than having none at all. And if the standards are not to your liking you can always create your own.
  • RossHRossH Posts: 5,512
    edited 2011-12-09 22:08
    All,

    Just an update. I've been re-reading this thread to try and summarize the various proposals, and there is more agreement here than may at first appear. There is general (but by no means unanimous) agreement that a minimalist, pragmatic, low-overhead and flexible standard for communicating with cogs would be a good idea. Some people who posted originally have apparently been scared off by a few of the more ummm ... vehement ... posts - but in most cases it is clear what their general view is.

    So I think I agree with 4x5n - this thread has achieved its main objective, and it is time to move to the next step of discussion in a new thread. That new thread will aim to identify what elements such a standard should incorporate, and I will kick it off once I can put all the ideas raised in this thread into a sensible overall structure. I'll try and incorporate everyone's suggestions - if I miss some it is entirely accidental, and you are free to add them back in again.

    I don't think the new thread will come up with a final standard yet either - but I hope it will help identify the scope of any such standard, and perhaps allay some of the concerns expressed here that we may somehow "stifle" creativity. I personally believe we may end up doing precisely the opposite. I can't prove that of course - but there are certainly enough real-world examples that point to this as a likely outcome.

    I'm also now thinking (again in response to the concerns expressed here) that any standard should not be an "all or nothing" affair - instead, it will consist of a layered set of techniques that - either used individually or all together - provide a rich infrastructure, parts of which are applicable to any program that needs to span multiple cogs.

    For those who are worried about being "forced" to adopt a standard - this is not, and never has been the intent of this thread. Having a standard does not mean everyone is required to comply with it, and does not preclude having other competing or conflicting standards. Someone brought up internet standards earlier in the thread - it would perhaps be instructive to look at how many competing internet standards there are for secure file transfers. Or how many competing standards there are for virtual private networks.

    The reality is that you choose the standard that best suits your needs. In this case, it just means that if two or more cog programs do comply with the standard, then they know they will be able to inter-operate and communicate without any further development work being required.

    If your aim is to write cog programs guaranteed to be able to inter-operate, and guaranteed to be able to be used from different language environments (at least those which also adopt some level of the standard), then it is likely you will need to adopt pretty much all the components of the standard.

    However, if you are simply writing a set of cog programs for your own purposes, then you can treat it as a grab-bag of techniques that you can pick and choose as you need - knowing that they will all work together, and knowing that if you later decide to add some new capability you did not originally anticipate, there is a well-defined pathway you can use to do so. Also, if someone else wants to use your cog program, or you find a cog program that does what you need elsewhere - and often there are a multitude to choose from - then figuring out how much work is required to integrate each alternative becomes very easy, and this may guide your choice.

    I would encourage those who wish to continue to discuss whether such a standard is a good idea (or not!) to continue this discussion in this thread. I think some good and interesting - and sometimes surprising - views have been expressed.

    Ross.
  • Heater.Heater. Posts: 21,230
    edited 2011-12-09 22:47
    An important feature of the internet standards is that they are layered. Think network stack. The physical layer, the link layer, the transport layer etc etc.

    At each layer there are a number of standards to choose from but with the intention that they all fit with what is below and what is above. So you can have your socket connection over ethernet or wireless or serial cable. And so on.

    What this means is that you don't have to adopt a whole massive fixed standard, all or nothing.

    So. In the case under discussion we see, for example, that I and others are fixated on the low level aspects of getting a COG started and useable easily from all languages. This aspect perhaps only calls for the 5 rules outlined by Jazzed plus maybe Dr A's idea of a delayed start to COG operation for use with boot loaders.

    Moving up "the stack" we see that RossH would like a more high level spec of COG to COG communication.

    Further up others want to take this to a standard for Prop to Prop communication.

    With such multiple layered standards or conventions it would be possible for people to adopt as much of it as they feel is good for them and ignore any work or overheads not suitable for them.

    This layering approach may even result in some of it being adopted:)
  • potatoheadpotatohead Posts: 10,261
    edited 2011-12-09 23:08
    Exactly what I was thinking, and wrote actually. Navigated away from the page, so now it's gonna be shorter.

    IMHO, a cog that does the 6 "good things" should be modeled, then optimized as a starter template for what is to come. For clarity then, these things are:
    1. COG drivers should be self contained where possible.
    2. If extra data is needed for the cog, it should be passed as parameter(s).
    3. All external control data should be passed as parameter(s).
    4. The COG driver API must be described so that any interface code can use it.
    5. *On COG start, PAR points to an address that contains 0. COG waits until a non-zero value is placed there, which it then uses as its address for the HUB memory comms block.
    6. *COG should write zero back to *PAR value when ready for commands (non-zero before cognew).
    Implied in the above: Don't modify COG image prior to COGNEW.

    * I think this order makes more sense.
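
    Taken as written, points 5 and 6 amount to a small start-up handshake. A hedged C sketch of the host side follows; the names, the cognew()-style call and the comms block layout are illustrative assumptions, not anything settled in this thread:

        #include <stdint.h>
        #include <propeller.h>                  /* assumed: provides cognew()       */

        extern uint8_t driver_image[];          /* hypothetical PASM blob            */

        static volatile uint32_t handshake;     /* the long that PAR will point at   */
        static volatile uint32_t comms[4];      /* hypothetical HUB comms block      */

        int start_and_handshake(void)
        {
            handshake = 0;                               /* point 5: PAR -> 0        */
            int cog = cognew(driver_image, (void *)&handshake);
            if (cog < 0)
                return cog;                              /* no free cog              */
            handshake = (uint32_t)&comms[0];             /* hand over the block...   */
            while (handshake != 0)                       /* point 6: the cog writes  */
                ;                                        /* zero back when ready     */
            return cog;
        }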


    Is that a fair summary product of this thread? I'm considering a modification to a COG program or two that is close on these things so that it complies.

    The way I see it, that can be contributed here, and the code whittled down to the mere nubs, formatted well, and boiled down to a start template anyone COULD use to then build a COG program of their own. I'm a big believer in "start" or "seed" or "template" entities, because it's easier to just drill right down to the matter at hand when they are available and easy to use. From there, they are either adopted, fall into disuse, or are modified. Where we've had these, there has been greater re-use, and I'm citing existing objects that were used to build new ones, adopting the same initial constructs.

    That then, is a layer. One layer, of X layers...

    Edit: FWIW, should this summary prove sufficient, the thread was worth it, if anything to just have that put down in a concise way. Most of the above isn't new, but it's also tribal knowledge, distributed among many discussions and the better objects serving as examples.
  • RossHRossH Posts: 5,512
    edited 2011-12-10 00:04
    potatohead wrote: »
    Exactly what I was thinking, and wrote actually. Navigated away from the page, so now it's gonna be shorter.

    IMHO, a cog that does the 6 "good things" should be modeled, then optimized as a starter template for what is to come. For clarity then, these things are:

    ...

    Is that a fair summary product of this thread?

    ...

    That then, is a layer. One layer, of X layers...

    Edit: FWIW, should this summary prove sufficient, the thread was worth it, if anything to just have that put down in a concise way. Most of the above isn't new, but it's also tribal knowledge, distributed among many discussions and the better objects serving as examples.

    Generally, I agree, potatohead. However, I would classify the six points as "foundation principles" or "recommended practice" rather than representing a layer of a possible standard - by themselves they do not actually go any way towards allowing two cogs to communicate - but I agree that they represent knowledge worth preserving.

    But this is not all that has come out of this thread - there is stuff here that potentially does contribute to a standard. Some very concrete suggestions have been made. The problem is that it is currently a bit all over the shop - but if we adopt a layered model, it starts to make sense - the main problem is that the ideas presented here - while potentially useful - belong to different layers. This makes them hard to compare and evaluate. But once they are organized correctly, I think the whole thing will start to make sense, and we can then begin to identify the missing parts.

    This is going to be more work than I thought - but I think it will be worthwhile in the end!

    Ross.
  • Heater.Heater. Posts: 21,230
    edited 2011-12-10 00:24
    What?!!
    If every single object in OBEX adhered to those six points I might bet that the PASM parts of all of them would be useable without change by Catalina, GCC, Zog and many other languages/compilers for the Prop. All that functionality buried away there in Spin wrappers would be available for use.
    I'd say that makes the six points a very valuable and concrete standard that enables communication with COGs from any language. Further, all Parallax "Gold Standard" objects should conform to that standard.

    I know it does not go as far as you would like in terms of specifying anything about data layouts or protocols or services etc. But it is a fundamental starting point at the lowest level.
  • RossHRossH Posts: 5,512
    edited 2011-12-10 01:46
    Heater. wrote: »
    What?!!
    If every single object in OBEX adhered to those six points I might bet that the PASM parts of all of them would be useable without change by Catalina, GCC, Zog and many other languages/compilers for the Prop. All that functionality buried away there in Spin wrappers would be available for use.
    I'd say that makes the six points a very valuable and concrete standard that enables communication with COGs from any language. Further, all Parallax "Gold Standard" objects should conform to that standard.

    I know it does not go as far as you would like in terms of specifying anything about data layouts or protocols or services etc. But it is a fundamental starting point at the lowest level.

    "What?!!", indeed! These six points don't even address cog-to-cog communication.

    I sometimes think we must be talking a different language. Let's look at those six points in a bit more detail:

    1. COG drivers should be self contained where possible.

    This appears to be a motherhood statement. To be honest, I'm not sure what it means. Each cog program must fit in a cog (where possible) and not use Hub RAM if it can use cog RAM? Is that what it means? Why not, for goodness sake!

    2. If extra data is needed for the cog, it should be passed as parameter(s).

    All this really seems to say is that we should use the par parameter to pass all information - which can be used only once, on initialization - so the par parameter must either itself encode all required information, or point to a place that does. This rule restricts the mechanism (e.g. presumably we can't modify the code before it is loaded as a means of passing data to the cog, or use fixed Hub RAM addresses that the cog program knows to look in - both very common and useful techniques!) but it does not specify anything about what we can or should do.

    3. All external control data should be passed as parameter(s).

    Seems to be the same point as 2. Again, this rule restricts, but does not specify.

    4. The COG driver API must be described so that any interface code can use it.

    This tautological statement collapses to "the cog driver must have an API". Ok - good! Have we learned anything yet?
    5. *On COG start, PAR points to an address that contains 0. COG waits until a non-zero value is placed there, which it then uses as its address for the HUB memory comms block.

    Okay! - some meat at last! I can agree to this! Now how big is that comms block? What's its structure? How do I use it? How do I actually communicate with the cog program I just started?... Hello? Hello? Is anyone there?

    6. *COG should write zero back to *PAR value when ready for commands (non-zero before cognew).


    Okay! - now we're really going places! The cog is now ready for commands - fantastic! How do I send it one? How do I know when it's finished it? Hello? ... Hello? ... Operator, we seem to have been disconnected!
    All these points might represent very useful programming tips - but how do they help anyone actually communicate with a cog program ... from even just one language? At best, they get me to the point of initializing the cog program ... but then (right when we get to the actual subject of this thread) they just leave me there - with apparently nothing further to say!

    Ross.
  • Heater.Heater. Posts: 21,230
    edited 2011-12-10 03:56
    In order to communicate with a COG from another COG, in any language, you have to at least be able to get the thing started.

    The way that is done for PASM code running COGs in a lot of existing objects makes it hard to reuse the PASM from other languages without a lot of study and rework of the code.

    This "layer" of specification addresses that simple reuse requirement.

    Admittedly the 6 points could be phrased more rigorously and perhaps there are not even 6 points when you have boiled it down but I'm sure you get the idea.

    Correct, this "layer" does not say anything about how communication proceeds when up and running. Nothing about command/response longs or shared buffers, or service discovery etc. And it is not intended to. Those are topics for higher layer specs.

    Honestly, if we can't get this simple idea to fly in the Propeller user space then there is no hope for anything at a higher level of abstraction.
  • ersmithersmith Posts: 6,097
    edited 2011-12-10 03:58
    RossH wrote: »
    All these points might represent very useful programming tips - but how do they help anyone actually communicate with a cog program ... from even just one language? At best, they get me to the point of initializing the cog program ... but then (right when we get to the actual subject of this thread) they just leave me there - with apparently nothing further to say!

    Communicating the initial parameters is a very important part of inter-cog communication. Indeed, for some drivers no further communication is necessary once the initial parameters have been set up (e.g. a video driver with a static frame buffer). Don't discount this part of communication.

    I think there are (at least) two models for how cogs are set up and work together:

    1.) A loader program initializes a bunch of services in various cogs, and then loads one or more programs which then discover the services and communicate with the cogs.
    2.) A loader program loads a single main program, which then starts up the cog services it wants.

    I believe you were thinking of model 1 when you started this thread, but model 2 is probably more commonly used.

    Eric
  • Heater.Heater. Posts: 21,230
    edited 2011-12-10 04:04
    Eric,
    I see what you mean, but isn't writing to a video frame buffer "communicating" with the video COG?

    Edit: Scratched last sentence of gibberish.
  • RossHRossH Posts: 5,512
    edited 2011-12-10 04:55
    Eric, Heater ...

    I accept that in either model 1 or model 2, the first thing we need to do is start and initialize the various cog programs (which may be the same step or not, depending on the mechanism employed). I also accept that the "6 points" are the beginnings of a useful set of "recommended practices" for how to start cogs in a way that can be done from any language.

    But I guess I don't really see this as being that much of an issue. Especially since the more significant aspects of the initialization of most cog programs (i.e. how to specify the hardware-specific resources that the cog might need - such as pins and memory) are not addressed by the "6 points" at all.

    I realize that I am massively oversimplifying here, but starting the cog is the easy bit. Communicating with the cog (once started) is where things start to get complex - and is where most of the re-work is required to enable a cog program to be used in a different language environment.

    In Catalina, I use many methods for starting and initializing cogs - including some that would seem to be ruled out by the "6 points". I'm sure I'm not alone in this - the cog programs that we all re-use (usually extracted from the OBEX) themselves use many methods, some of which violate the 6 points - and so I absolutely do agree that there is room for standardization here.

    But I still maintain that while starting or initializing cog programs is related to the topic of this thread, it is not actually the topic of this thread. I am happy to accept the "6 points" as a good set of recommended practices for how to start cog programs, and move on. I would also be happy to see a similar set of recommended practices for how to initialize cog programs.

    Putting it another way - in terms of heater's layering model, this stuff is "layer 0". But I don't see any problems in layer 0, and had hoped to be discussing layer 1 and perhaps layer 2 in this thread.

    I'll try and make this distinction clearer in the next thread.

    Ross.
  • potatoheadpotatohead Posts: 10,261
    edited 2011-12-10 08:44
    Hi Ross,

    Yes, I agree with you. The summary I posted doesn't get us very far toward your goal. A goal that I struggle to understand at this particular stage. No worries though. I'm quite happy to follow along with interest, simply because we get a useful chunk out of it sometimes. We did this time, and that's the summary I did.

    I submit that we won't ever get to your goal, without first realizing some basic understanding on cog code best practices.

    A few comments:

    1. Self contained, as motherhood! (funny as hell, BTW)

    Sometimes a COG program requires "magic numbers", or fairly significant computation / initialization prior to use. If the COG program is to just be some binary we can fetch 'n use, the less of that we do, the easier it is to just use the COG. A great example of this is early video drivers. I and others did all manner of Smile in SPIN, only to launch a video COG that then offered up some display on the TV. Using those COGS isn't going to be easy for anybody in any language, EVEN SPIN. I am quite sure that is what number 1 means. Really, the goal is to supply the required info, and no more, the cog then adjusting to various clock speeds, pin configurations, etc...

    As people struggled with various video creations, better more self contained COG code was the result. #1 is just an expression of that goal. Won't always be able to get there, but it is a fine goal, or practice, IMHO.

    It is a foundation requirement, as are the others, if we are to reach the standard comms between COGS.

    On #4, perhaps a restatement would help some.

    The function of the COG, its requirements, modes, etc. need to be described well enough for connecting code to be written without empirical testing and/or reverse engineering the COG code itself. i.e.: that "magic number" vs. "input your clock speed divided by 32" kind of thing. The other part of it is whether or not the COG needs constant attention from the higher level language. For best re-use, let's avoid that. A great example would be a video mode change. Eric Ball wrote the bits needed to compute this IN THE COG. Where before, we would compute in spin, stuffing the product of that into a COG comms block. Eliminating that dependency significantly improves the prospect of reuse.

    ***Any thoughts on #4 would be appreciated from anybody. I am going to drop this into my blog, and incorporate it into PASM writings I have in progress, because I see the potential for more COG device code or COG service code reused more times with that info out there and available than not. And simple reuse is a foundation for comms. All good, from where I stand.

    Ross, if it helps, think of the standard networking model. It's got various layers. Each of those has restrictions that make the higher level layers possible. Consider this the base layer, or layer 0.

    Your basic comms specification would then operate on top of that, ASSUMING layer 0 compliance. Heater nailed it. We have to be able to do this, if there will even BE a layer 1, IMHO.

    From my perspective, it was extremely useful to come to this realization, having been largely confused through this thread. Now, I don't know where the comms layer will go, but I do know I can produce a COG today that could eventually get there, right? Convergence, not divergence.

    In any case, I did appreciate the comment Phil made about standards and the part of them that comes from looking back at what was productive and what was not, educating people in advance of them starting down paths that diverge, encouraging paths that converge.

    That is why I did the summary. IMHO, as a set of guidelines for producing reusable COGS, that body of information is extremely useful. Anybody going down that road is highly likely to produce code that converges with both your desire to establish some higher level comms, and or other people's projects and code where said COG may be reused. That's great stuff!!

    For what it's worth Ross, clarifying what is layer 0, and what is not, is actually significant progress toward your goal, as you now have layer 0 compartmentalized in a way that people can operate with more easily, allowing you to build on layer 1. Progress!

    ***I really like this being brought to light:
    I think there are (at least) two models for how cogs are set up and work together:

    1.) A loader program initializes a bunch of services in various cogs, and then loads one or more programs which then discover the services and communicate with the cogs.
    2.) A loader program loads a single main program, which then starts up the cog services it wants.

    IMHO, that suggests two more layers, 1 and 2, where 1 would be the simpler model #2 above, the most common case. You are targeting layer 2, which isn't such a common case. Why? Because we've not yet really clarified and compartmentalized layers 1 and 0!

    When I first read this thread, I had not internalized those things. Now that I have done so in a fashion, I can better see where this could go. Again, the thread was productive on that basis, and I am quite sure I am not alone in that realization.

    Perhaps some modification to this layer will make sense as the higher level stuff happens. That's ok.

    [goes back to reading with great interest, having captured a nice 'nugget' to work with :) ]
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-12-10 12:08
    At 496 longs, cog RAM is a very dear commodity. I can't point to a particular example, but it's not hard to imagine a PASM cog so packed to the rafters with essential code, that the only way to initialize its parameters is to poke them into its hub space before a cognew. This is one example that illustrates the compiler writer's responsibility to make life easier for programmers and enhance their productivity, rather than vice-versa. I can well understand the temptation to establish programming standards that make a compiler writer's job easier -- especially if one is a compiler writer. :) But, in the end, what programmers really want is accommodation for the practices they already use to advantage. In the case I cited, what would be wrong with a C program invoking the Spin byte-code interpreter to handle a Spin start method? Assuming the start method is the first public method in the object, I believe this could be done without needing a symbol table or any relocation fix-ups. (I might be wrong about that, since I haven't paid much attention to Spin's object code structure lately.)

    I guess what I'm getting at is that the truly hard work should be done in the compiler so that programmers are free to use whatever techniques are suitable for the job at hand. And that includes accommodating boundary cases that entail programming practices considered less than ideal -- that is, until you really need them.

    -Phil
  • RossHRossH Posts: 5,512
    edited 2011-12-10 14:52
    potatohead & Phil ...

    I find I actually agree with both of you, since your positions are not incompatible. Yes, heater "nailed it" when he suggested the analogy with the network layering model - but one of the most essential aspects of the layering model is the independence of the layers.

    I don't want to push this analogy too far (especially as TCP/IP has only 4 layers compared to the ISO 7 layer model - so the layers do not align exactly) but in networking terms, the lowest layer always encompasses the physical layer. And this layer is by far the most diverse and difficult to standardize. Someone mentioned the internet RFC process earlier in this thread - and if you know much about that process, you are probably aware of RFC 1149 which uses carrier pigeons as the physical layer of a workable implementation of TCP/IP.

    The point of layering in the first place is that layer 1 should be able to be implemented on any implementation of layer 0, provided the services offered by layer 0 are well defined.

    In the latter part of this thread we have been largely arguing about details that are mostly internal to layer 0, and ultimately will have little or no impact on layer 1. So while I agree with potatohead that in some sense the whole structure will rest on the foundation decisions made about layer 0, in another sense I don't care much what those decisions are!

    Like Phil, I don't think we need to limit the techniques that can be used in layer 0 unless they impact on the higher layers (and some of them surely do - for example, relying on things that can only easily be done in Spin).

    Further, I think the internal arguments we may have about layer 0 should not stop us speculating about the services it offers to layer 1, or the services offered by layer 1 - and so on up the hierarchy. Once we have an appropriate structure in place and have identified what services belong in what layers (not to mention exactly how many layers there might be!) then I think the whole thing will begin to crystallize for everyone.

    That's going to be the point of the next thread - which I hope I will get time to set up today.

    To put it the context of RFC 1149 - would you rather we started flying, or would you rather we sat here arguing about who should clean up the birdsh*t in the bottom of the cage? :)

    Ross.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-12-10 15:30
    From a practical standpoint, I believe it's always better to demonstrate an idea with actual code than to invite commentary before things are fleshed out. While the latter may seem more democratic on the surface, the former strikes me as more efficient. If anything, this thread is testimony to that. Once something is demonstrated, it can be debated in concrete terms, rather than via passionate (and sometimes vitriolic) discussion. One good working demo is worth a thousand hypotheticals.

    A perfect example of this process, which worked well, was Bill Henning's announcement of the LMM paradigm. He had something to show us right off, without inviting preliminary commentary or contributions. As a result, we all had something concrete at our fingertips to try, modify, and make suggestions about. But you've got to plant the seed first, rather than just showing people pictures of a seed. At the very least, a concrete example will eliminate any questions regarding what the discussion is about.

    In this case, I think that taking an object from the Propeller Library or OBEX, pointing out its faults vis-
  • Mike GreenMike Green Posts: 23,101
    edited 2011-12-10 15:45
    I'll take a stab at some low-hanging fruit ... stream devices. These can be unidirectional like a keyboard (input) or a display (output) or a half-duplex serial channel (either direction). These can be bidirectional like a serial channel. In all of these cases, there may be additional info other than single characters. For a keyboard, there may be shift codes, scan codes, etc. For a display, there may be a pixel location or color information. These can be encoded in the byte stream or given as additional information in a 32-bit or 64-bit structure used for the stream. For a bidirectional device, there would have to be two streams. To be able to use a bidirectional for unidirectional communications, you'd have to have an order to the stream structures and, if you used a unidirectional device, there'd have to be a dummy stream structure as a placeholder for the unsupplied stream. In addition to two stream structures, there would have to be some device specific information. Some might be supplied by the device driver somehow, like the size of the parameter area, a name or code for the specific device. Some would be supplied during initialization like a baud rate or display dimensions. I'll stop there ... any thoughts?
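
    One way to picture the kind of structure Mike describes - a per-direction stream plus a device-specific parameter area - is sketched below in C. Every name, size and field here is an illustrative assumption rather than a proposal from the thread:

        #include <stdint.h>

        #define STREAM_BUF_SIZE 64            /* assumed; a power of two            */

        typedef struct {                      /* one direction of a stream          */
            volatile uint16_t head;           /* advanced by the writer             */
            volatile uint16_t tail;           /* advanced by the reader             */
            volatile uint32_t data[STREAM_BUF_SIZE];   /* 32-bit items: a char, a
                                                 scan code, a pixel + color, ...    */
        } stream_t;

        typedef struct {
            stream_t rx;                      /* dummy placeholder if output-only   */
            stream_t tx;                      /* dummy placeholder if input-only    */
            uint32_t param_size;              /* size of the device-specific area   */
            uint32_t device_code;             /* name or code for the device        */
            uint32_t params[4];               /* e.g. baud rate, display dimensions */
        } stream_device_t;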
  • RossHRossH Posts: 5,512
    edited 2011-12-10 16:42
    Mike Green wrote: »
    I'll take a stab at some low-hanging fruit ... stream devices. ... any thoughts?

    Hi Mike,

    Stream devices are a common class of devices - but this is largely an O/S level abstraction. One of the great successes of Unix (and C) was to show how many of the necessary interactions between a program and its environment can be simply and effectively modelled as one or more streams.

    But Unix possibly went too far in that direction, and modelled things as streams that could have been better handled in other ways. C inherited streams (or rather had to support them) because C grew up in a Unix environment. We persevere with streams because they are so suitable for simple character-oriented devices (e.g. terminals) - but they turned out to be really unsuitable in other cases (e.g. GUIs). And for both file systems and networking, streams are conceptually simple but actually they are very complex to manage due to the need for out-of-band signalling in both cases - streams don't do this well.

    Also, once you get below the abstract O/S level, almost no physical device is actually a stream - most physical devices deal with data in messy chunks which require massaging to turn them into a stream. Physical devices also have lots of state dependent behaviour and error conditions where a stream interface may be unsuitable.

    On the Propeller (and in this thread) we are definitely down at that physical level here, rather than at the abstract level. Or, put another way - streams are a language level abstraction, and here we want something language-independent. I think a better model is to think of devices as offering a set of services. Those services may be as simple as a "get byte" and "put byte" (which would be enough to satisfy most stream oriented devices), or they may be more complex such as "goto screen location <x,y>", "set color to <rgb>" etc.

    Not sure yet how this all hangs together - but I think what I am saying is that one of the possible mechanisms we support should be specifically optimized toward implementing services for devices that will be modelled at a higher level as a stream - but other mechanisms (like shared memory) will probably also need to be supported.

    Ross.
  • ersmithersmith Posts: 6,097
    edited 2011-12-10 17:30
    Mike Green wrote: »
    I'll take a stab at some low-hanging fruit ... stream devices. These can be unidirectional like a keyboard (input) or a display (output) or a half-duplex serial channel (either direction). These can be bidirectional like a serial channel. In all of these cases, there may be additional info other than single characters. For a keyboard, there may be shift codes, scan codes, etc. For a display, there may be a pixel location or color information. These can be encoded in the byte stream or given as additional information in a 32-bit or 64-bit structure used for the stream. For a bidirectional device, there would have to be two streams. To be able to use a bidirectional for unidirectional communications, you'd have to have an order to the stream structures and, if you used a unidirectional device, there'd have to be a dummy stream structure as a placeholder for the unsupplied stream. In addition to two stream structures, there would have to be some device specific information. Some might be supplied by the device driver somehow, like the size of the parameter area, a name or code for the specific device. Some would be supplied during initialization like a baud rate or display dimensions. I'll stop there ... any thoughts?

    Actually that's a great model. Any device could be communicated with as a stream, namely a stream of input requests and a stream of output results. So as an abstraction it works great. Implementing it efficiently in a COG may be an issue; managing queues of input requests and output results is a fair bit of overhead :-(. But perhaps we can special case simple synchronous drivers with a "trivial" queue (1 entry) in some way.
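
    A hedged sketch of that "trivial (1-entry) queue" special case - essentially a synchronous mailbox - with all names invented for illustration:

        #include <stdint.h>

        typedef struct {
            volatile uint32_t request;        /* non-zero = command pending         */
            volatile uint32_t arg;            /* e.g. a pointer to a shared buffer  */
            volatile uint32_t result;         /* written by the cog before it
                                                 clears request                     */
        } mailbox_t;

        static uint32_t cog_call(mailbox_t *m, uint32_t cmd, uint32_t arg)
        {
            m->arg = arg;
            m->request = cmd;                 /* the cog polls for a non-zero value */
            while (m->request != 0)           /* ...and zeroes it once the result   */
                ;                             /* has been stored                    */
            return m->result;
        }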

    Eric
  • Cluso99Cluso99 Posts: 18,069
    edited 2011-12-10 17:35
    [OT] AFAIK LMM was proposed years before it was actually used with any real purpose. This was Bill's foresight into problems we would ultimately face with the prop architecture. Now it has proved to be a fantastic tool for running all kinds of code. [/OT]

    We have at least now identified in some respects where Ross is headed (or perhaps more correctly, not headed).

    I think I (and perhaps a lot of others) do not fully understand where Ross is headed, because I have not written a high level compiler. I do however see the need for this, particularly because I am sure we would all like as many languages as possible on the prop to be able to fully co-exist. At the least, I see pasm and spin should be able to co-exist with any other high level language because most objects have spin and pasm.

    Anyway, I would like to pose a few suggestions with the "level 0" interface (for pasm objects)...
    1. Where possible, an object should contain all interface code within the cog.
      1. Routines typically in a helper wrapper e.g. like hex, dec, string, etc
      2. Initialising variables e.g. copying using par
    2. Where NOT possible, an object should use
      1. A new method to be defined to "implant" values into the cog code while still resident in hub
        1. I think we understand enough to be able to define a standard method to do this
      2. Use a standard set of "helper routines"
        1. We need an object to perform a set of standard routines
          1. e.g. to break down the functions hex, dec, string, bin, etc
          2. This object could then be recoded in each of the languages
    For item 2.1 we could use a separate DAT space...
    • To contain a table of offsets where the variables need to be implanted in the cog code.
    • Perhaps this table needs a set of values of AND and OR so that the variables implanted may only be a subset of bits???
    • We could use LMM to load these variables under control of the cog when it starts
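
    One possible shape for the 2.1 mechanism - a patch table that code in any language could walk to implant values into the cog image while it is still resident in hub RAM - is sketched below in C. The table format, masks and names are assumptions for illustration only:

        #include <stdint.h>

        typedef struct {
            uint16_t cog_offset;              /* cog address (in longs) to patch    */
            uint32_t and_mask;                /* bits of the original long to keep  */
            uint32_t or_value;                /* bits to implant (e.g. a pin field) */
        } patch_entry_t;

        /* Apply a patch table to a cog image still resident in hub RAM,
           before the cog is started. */
        static void implant(uint32_t *image, const patch_entry_t *table, int count)
        {
            for (int i = 0; i < count; i++) {
                uint32_t *slot = &image[table[i].cog_offset];
                *slot = (*slot & table[i].and_mask) | table[i].or_value;
            }
        }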
  • ersmithersmith Posts: 6,097
    edited 2011-12-10 17:40
    RossH wrote: »
    Stream devices are a common class of devices - but this is largely an O/S level abstraction. One of the great successes of Unix (and C) was to show how many of the necessary interactions between a program and its environment can be simply and effectively modelled as one or more streams.

    But Unix possibly went too far in that direction, and modelled things as streams that could have been better handled in other ways. C inherited streams (or rather had to support them) because C grew up in a Unix environment. We persevere with streams because they are so suitable for simple character-oriented devices (e.g. terminals) - but they turned out to be really unsuitable in other cases (e.g. GUIs).

    Actually the Plan 9 OS (and some other GUI systems, like Bellcore's MGR http://en.wikipedia.org/wiki/ManaGeR) showed that the stream model works fine for GUIs. I think Rob Pike and some of the other Unix architects would argue that Unix didn't go far enough in the stream paradigm, which is why they developed Plan 9.

    All of which is moot for the Propeller, because I don't think anyone is seriously proposing that all devices be controlled by streams of ASCII characters. But at heart what we're going to have is a stream of input requests and a stream of output results. The requests will often contain pointers to shared memory, so we wouldn't be talking about a "pure" stream abstraction for all devices, but given simple and fast queue functions a stream of requests and results would be a great way to communicate with a Cog.

    Eric
  • ersmithersmith Posts: 6,097
    edited 2011-12-10 17:53
    Cluso99 wrote: »
    Anyway, I would like to pose a few suggestions with the "level 0" interface (for pasm objects)...
    1. Where possible, an object should contain all interface code within the cog.
      1. Routines typically in a helper wrapper e.g. like hex, dec, string, etc
      2. Initialising variables e.g. copying using par
    2. Where NOT possible, an object should use
      1. A new method to be defined to "implant" values into the cog code while still resident in hub
        1. I think we understand enough to be able to define a standard method to do this
    I think there's a problem with 1.1. Wrapper functions like "hex", "dec", and "string" are redundant... we end up with multiple copies of code, since all of the "hex" and "dec" functions for various drivers end up looking the same (convert a number to a string and then call the "string" method). In Spin there are no high level printing functions, but in other languages there are often standard ways to do printing, like C's printf so the high level languages will most likely want to use those.

    For 2.1, I think someone had a suggestion that all the "implantable" parameters should be at the start of the object, right after a jmp. That makes it easy to find them (they're always at offset 4) so I think that's the way to go.

    Eric
  • Mike GreenMike Green Posts: 23,101
    edited 2011-12-10 18:20
    For some types of devices like keyboards, simple displays, and asynchronous serial devices, streams are a good fit to device function. That's how our current drivers work using simple circular buffers for the keyboard and serial drivers. Sure, that's how these are handled in O/Ss, but it's a good match to the devices.
  • RossHRossH Posts: 5,512
    edited 2011-12-10 18:32
    ersmith wrote: »
    Actually the Plan 9 OS (and some other GUI systems, like Bellcore's MGR http://en.wikipedia.org/wiki/ManaGeR) showed that the stream model works fine for GUIs. I think Rob Pike and some of the other Unix architects would argue that Unix didn't go far enough in the stream paradigm, which is why they developed Plan 9.
    Yes, fair point - I believe Inferno goes even further down this path. But it's hard to know whether this approach is actually a "good thing", since so few instances of these operating systems are ever seen outside the labs that create them. In any case, this all just tends to support my point that such abstract streams are primarily an O/S level construct that may be overlaid on a lower level.

    On the Propeller we generally have no O/S (and no need for one) but some languages may choose to provide sophisticated streams support - C++ for instance. C is in the middle ground here. Other languages like BASIC have no concept of streams at all (not sure about Forth).
    ersmith wrote: »
    All of which is moot for the Propeller, because I don't think anyone is seriously proposing that all devices be controlled by streams of ASCII characters. But at heart what we're going to have is a stream of input requests and a stream of output results. The requests will often contain pointers to shared memory, so we wouldn't be talking about a "pure" stream abstraction for all devices, but given simple and fast queue functions a stream of requests and results would be a great way to communicate with a Cog.
    Eric

    Not exactly sure I know what you mean here Eric - do you mean that as well as a synchronous command/response mechanism, and a simple asynchronous "block of shared memory" mechanism, that we should also support a queue mechanism (as used in some of the existing serial and keyboard drivers)?

    If so, then I agree. All these are useful communications mechanisms, and to rule any of them out would be a mistake. I guess the real question is - are there any more fundamental mechanisms we need to support?

    Ross.
  • RossHRossH Posts: 5,512
    edited 2011-12-10 18:41
    ersmith wrote: »
    I think there's a problem with 1.1. Wrapper functions like "hex", "dec", and "string" are redundant... we end up with multiple copies of code, since all of the "hex" and "dec" functions for various drivers end up looking the same (convert a number to a string and then call the "string" method). In Spin there are no high level printing functions, but in other languages there are often standard ways to do printing, like C's printf so the high level languages will most likely want to use those.
    Not sure I agree with this - we tend to think about cog space being the most valuable, but sometimes (especially when using high level languages) you find you have a heap of unused cog space, but hub space is at a premium. I move code into the cogs wherever possible.
    ersmith wrote: »
    For 2.1, I think someone had a suggestion that all the "implantable" parameters should be at the start of the object, right after a jmp. That makes it easy to find them (they're always at offset 4) so I think that's the way to go.
    Eric

    Yes, I've got this mechanism in the summary of this thread I'm preparing. I also think it is a good mechanism, and the necessary overhead (one instruction for the initial jmp instruction) seems acceptable.
  • RossHRossH Posts: 5,512
    edited 2011-12-10 18:45
    Mike Green wrote: »
    For some types of devices like keyboards, simple displays, and asynchronous serial devices, streams are a good fit to device function. That's how our current drivers work using simple circular buffers for the keyboard and serial drivers. Sure, that's how these are handled in O/Ss, but it's a good match to the devices.

    Yes, I agree - we should support a mechanism intended to simplify implementing character-oriented devices, since they are so common.

    I'm thinking this would be only one of several alternative mechanisms, with the other mechanisms intended to support other device types.

    Ross.
  • Phil Pilgrim (PhiPi)Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-12-10 19:04
    ersmith wrote:
    I think there's a problem with 1.1. Wrapper functions like "hex", "dec", and "string" are redundant...
    I definitely agree with this (except in the case of str methods, which are simple and belong in each I/O object *). Objects like Chip's SimpleNumbers and my SimpleNumbersPlus deal with these transformations for all I/O objects. Such transformations don't need to be replicated for each I/O instance.

    -Phil

    * If Spin had method pointers, I would also exclude str methods; but, without such pointers, these methods should be associated with each I/O object.
  • 4x5n4x5n Posts: 745
    edited 2011-12-10 20:08
    kwinn wrote: »
    I must confess to having mixed feelings on this topic. On the one hand I agree with Phil up to a point. I would not like to see rigid standards that stifle creativity. On the other hand I am generally in favor of standards and find them useful as guidelines or a framework for writing an application.

    Perhaps each person who has a standard to propose could start a thread of their own with a more detailed description and some examples of the standard in use so the forum can look them over and possibly make some suggestions for improvements.

    I would certainly be delighted to find a useful standard. Even having several standards would be better than having none at all. And if the standards are not to your liking you can always create your own.

    I see the standards being discussed here as being more for the compilers to follow and to a lesser extent objects submitted to the obex. I've already suggested that it's time to start a discussion on what the standards should be. I also think that it's possible to have multiple standards that are more or less compatible. Something to keep in mind is that as far as I know no one has suggested that these standards be enforced in hardware or the "spin tool". Meaning that if you want to write all of your own code you could do so any way you want.