
My attempt at a "Gold Standard" process


Comments

  • prof_braino Posts: 4,313
    edited 2012-12-28 06:44
    jazzed wrote: »
    ...you and I agree on those points.

    This is a good sign. As a primary contributor, your input and agreement is critical to the success of the effort.
    BTW, most of that is in the Parallax Gold Standard document, although it is not explicitly stated.

    And that is exactly the issue. If we have to "read between the lines" or guess or interpret, then each individual will get a different take. And folks have explicitly told me that these items (description of the function, description of the interfaces, some type of tests, and peer review that says the previous three are sufficient) were NOT part of the standard, in their interpretation. (Therefore the "standard" is in error.) We want to make it easy for folks to agree with you, rather than easy to go off in a different direction and think they disagree with you.

    The process is NOT to tell us to do something we wouldn't normally do. The process is to get us to agree on the set of activities that ARE what we normally do when we are doing our best (and under what conditions); and to agree that we will do these activities, and somehow show that we did them.

    In essence, we want to determine what behaviors the best engineers perform when they have the greatest success, distill those into a single list, and teach everyone what these are, why we do them, and how.
  • Circuitsoft Posts: 1,166
    edited 2012-12-28 09:49
    SRLM wrote: »
    Personally, I would never review a piece of code in the depth that it required unless I needed to use it. In which case, I dive down into the internals and modify to suit. I've always imagined that the "Gold Obex" would have to work the same way: the only people who will do more than a superficial review are those who need to use it in a project. This naturally leads to the conclusion that you will rarely (never?) have a group of volunteers actively working on a single object to bring it to Gold Status.
    This driver has been vetted (in its Spin/PASM form) already, and the assembly is the same. I only converted the Spin to C. That's mostly what I'm looking to have reviewed.
    SRLM wrote: »
    And on that note, my superficial comments about the FFDS1.c/h files are:
    1) I think the dat section should be volatile. Also, with spin2cpp you can use --dat with --gas to make a separate .s file (apparently: I've never tried it, although I will tomorrow).
    2) Should line 25 of FFDS1.c really be static? That would imply to me that you can only have one of the objects, which seems like a problem in some situations (I'm not at all sure of this, though).
    3) Code fluff, but the placement of "{" is inconsistent: sometimes it's at the end of a line, sometimes on a line by itself.
    4) If you don't use the specific memory types (int32_t, etc) then you don't need to #include <stdint.h>.
    SRLM, can you sign up for Gerrit and add the comments there? I understand what you're saying, but they'll be easier to fix with them inline, and you can probably put more detail with fewer words that way. Also, each line of code can have, basically, its own discussion if necessary.

    Thanks for the commentary though. I'm hoping to get, at least, the form of this submission pristine by community standards, so it can be used as an example to define a coding standard.
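
    As a rough illustration of SRLM's point 2 above, here is a minimal sketch of a per-instance state structure in C. The names (FFDS1_t, FFDS1_start) and fields are hypothetical, not the actual FFDS1 code; the point is only that callers own their own state instead of sharing one hidden static instance.

        #include <stdint.h>

        /* Hypothetical per-instance state for a full-duplex serial driver.
           Callers allocate as many of these as they need, so the driver is
           not limited to a single hidden static instance. */
        typedef struct {
            int32_t rx_pin;
            int32_t tx_pin;
            int32_t baud;
            volatile int32_t cog;   /* cog running the PASM driver, -1 when stopped */
        } FFDS1_t;

        /* Initialize one driver instance; a real version would also launch the PASM cog. */
        int FFDS1_start(FFDS1_t *self, int32_t rx_pin, int32_t tx_pin, int32_t baud)
        {
            self->rx_pin = rx_pin;
            self->tx_pin = tx_pin;
            self->baud   = baud;
            self->cog    = -1;
            return 0;
        }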
  • jazzed Posts: 11,803
    edited 2012-12-28 11:09
    The registration process bothers me. It's more than I'm interested in committing to doing.
    Why do we need Git to do a review? I really don't want to download files to do a review.
  • Circuitsoft Posts: 1,166
    edited 2012-12-28 11:59
    You don't need to download the files to review them - you can do so via the web interface. Click on each file to view the diff (in this case, diffing against a non-existent previous version) and double-click a line to add a comment to it. You only need Git to submit new files or changes.

    Gerrit is explicitly as lightweight as possible in the registration department. Username and password handling is farmed out to third parties via OpenID, so no password ever touches my server, and the token Gerrit uses to authenticate you can be revoked at any time by your OpenID provider (at least Google makes it easy). Chances are you already have an OpenID provider you can use, so you shouldn't need yet another password to remember.
  • Circuitsoft Posts: 1,166
    edited 2012-12-28 12:18
    By the way, Steve (Jazzed?), you did complete enough of the registration to review on Gerrit. I know it doesn't make that immediately obvious...

    Do you mind if I add you to the "Senior Developers" group? That'll give you permission to absolutely accept/reject code, but it's still up to the Process Admins to submit accepted code to the repository.

    I know it's difficult to read (until you've been through the process once), but here is a diagram of the Gerrit workflow. The "Verifier" can be an automated CI process (such as Jenkins, which I will implement soon). At this point, I think we can have the CI process make sure everything compiles, but I don't have a way to test things yet.
  • jazzed Posts: 11,803
    edited 2012-12-28 12:53
    By the way, Steve (Jazzed?), you did complete enough of the registration to review on Gerrit. I know it doesn't make that immediately obvious...

    It's obvious; however, I'm a casual user at the moment. Seems to me that casual users should have the option to comment without needing all the infrastructure.

    I do like the flow chart, but the automated commit is new territory for me ....

    Dave Hein's Propeller simulator could be leveraged for testing via Tcl/Expect for primitive verification in a "test harness". However, needing to add special test code to a module may not be very desirable.

    BTW, "jazzed" is the pronunciation of my initials J.S.D.

    --Steve
  • Circuitsoft Posts: 1,166
    edited 2012-12-28 13:07
    There are two reasons for registration: one is to tie comments to the same person who may do commits down the road; the second is to avoid spam, which OBEX has been seeing a lot of lately. It's the same reason you can't comment on the forum without registering, though if Parallax were to implement the forums as an OpenID provider, then you could use your Parallax Forum handle as your Gerrit user.
  • prof_braino Posts: 4,313
    edited 2012-12-28 14:44
    SRLM wrote: »
    This naturally leads to the conclusion that you will rarely (never?) have a group of volunteers actively working on a single object to bring it to Gold Status.

    This is the idea that I've been playing with. "Gold" should mean "it has been thoroughly vetted and reviewed to the point that all participants feel it is complete." And we set the minimum number of reviewers to, say, five for any piece of code to be considered. If any comment requires a change to be made, the changed code is considered new again.

    Only code that is considered very useful or very interesting would ever receive sufficient review, and thus the repository would be self-governing. If five people did review the code, they would suggest changes and check that they were made properly. When all five agree that it is sufficient "as-is", needing no further changes, we could accept the code based on our assessment of the capability and expertise of the reviewers.

    Parallax (and us) would only have to evaluate and give final "blessing" to a very small number of "popular" items, and everything else would be in the queue waiting for a spike in popularity.
  • Circuitsoft Posts: 1,166
    edited 2012-12-28 17:06
    By the way, if you want to download code without needing to set up Git, click the "(gitweb)" link next to the patch set, then click on "snapshot" next to the "tree" line at the top.
  • lonesock Posts: 917
    edited 2012-12-28 20:09
    Random thought: Maybe have 3 stages of certification. When an object is initially accepted (maybe after N nominations?) it is 'Bronze'. Passing basic testing could advance it to 'Silver', and it attains the 'Gold' certification either through V votes, or M months without a bug report or something. I have no idea what actual mechanisms would work best. However, this might be nice so that new users can at least use a broad selection of decent objects right away, without having to wait for whatever certification process is in place. Also, users can gauge how difficult it might be to use a given object in a production setting, knowing they may have to put effort into a Bronze object, or they could wait for a Gold object, etc.

    Jonathan
  • SRLM Posts: 5,045
    edited 2012-12-28 21:55
    lonesock wrote: »
    Random thought: Maybe have 3 stages of certification. When an object is initially accepted (maybe after N nominations?) it is 'Bronze'. Passing basic testing could advance it to 'Silver', and it attains the 'Gold' certification either through V votes, or M months without a bug report or something. I have no idea what actual mechanisms would work best. However, this might be nice so that new users can at least use a broad selection of decent objects right away, without having to wait for whatever certification process is in place. Also, users can gauge how difficult it might be to use a given object in a production setting, knowing they may have to put effort into a Bronze object, or they could wait for a Gold object, etc.

    Jonathan

    Continuing the thought a bit, you could clarify each level with something like:
    Bronze: N votes for inclusion
    Silver: Meets formatting and basic documentation guidelines (follows defined convention, has example programs, etc.) <- Matches most "good" objects nowadays
    Gold: Includes unit testing and documented requirements

    With this system, a new object could be submitted and, upon passing community voting, go directly to the level it fits best. It doesn't eliminate the "new" or "bad" objects, but it does highlight the "excellent" objects. I like the level system.
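
    To make the Gold line item concrete, a unit test doesn't have to be elaborate. Here is a minimal sketch in C (hypothetical object and function names, not an actual OBEX object), with one small test per documented requirement:

        #include <assert.h>
        #include <stdio.h>
        #include <string.h>

        /* Hypothetical object under test: formats tenths of a degree as "NN.N C". */
        static int format_temp(char *buf, int size, int tenths_c)
        {
            return snprintf(buf, size, "%d.%d C", tenths_c / 10, tenths_c % 10);
        }

        /* One small test per documented requirement keeps review simple: a reviewer
           can check each claim in the documentation against a matching test. */
        static void test_format_temp_basic(void)
        {
            char buf[16];
            int n = format_temp(buf, sizeof(buf), 235);
            assert(n > 0);
            assert(strcmp(buf, "23.5 C") == 0);
        }

        int main(void)
        {
            test_format_temp_basic();
            return 0;   /* a non-zero exit would flag a failure to an automated checker */
        }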
  • eiplanner Posts: 112
    edited 2012-12-28 23:18
    I must have missed a few threads way back somewhere...Why doesn't Parallax just do this with the OBEX? All in all, I think that's what this is; just another OBEX but with higher standards. (which I agree need to be in place) It seems to me that it wouldn't take long for the process to become overwhelming for several people that are already very committed to this forum. It's hard to recruit new users to move into a new process when they have only begun to learn this one. If the majority in the community are asking for something like this to be implemented then why isn't Parallax implementing it? I think that's a shortcoming on their end that needs to be tended to. In the meantime, I think branching out somewhere else isn't such a good idea. It causes one to make a determination of which boards or groups of reviewers they want to submit their work to first for peer review. I feel more comfortable here with these guys when it comes to input and/or assistance. So, unless most of these guys move to that site and leave this one, I think most will continue here.

    Just a novice speaking my 2 and a third cents.
  • SRLM Posts: 5,045
    edited 2012-12-29 00:21
    eiplanner wrote: »
    I must have missed a few threads way back somewhere...Why doesn't Parallax just do this with the OBEX? All in all, I think that's what this is; just another OBEX but with higher standards. (which I agree need to be in place) It seems to me that it wouldn't take long for the process to become overwhelming for several people that are already very committed to this forum. It's hard to recruit new users to move into a new process when they have only begun to learn this one. If the majority in the community are asking for something like this to be implemented then why isn't Parallax implementing it? I think that's a shortcoming on their end that needs to be tended to. In the meantime, I think branching out somewhere else isn't such a good idea. It causes one to make a determination of which boards or groups of reviewers they want to submit their work to first for peer review. I feel more comfortable here with these guys when it comes to input and/or assistance. So, unless most of these guys move to that site and leave this one, I think most will continue here.

    Just a novice speaking my 2 and a third cents.

    The current OBEX software (the server itself) is outdated and can't be upgraded. We (as a community) would like several features that the OBEX can't provide, such as revision control, ability to attach comments/issues to files/LOC, community management, issue tracking, indexing/searching, and so on. That is the motivation for going with a new OBEX server system.

    The motivation for going with a new process is that the current one does not allow for a good differentiation between "bad" and "good" objects.

    I don't think that Circuitsoft's server is really a fragmentation of the community. Rather, it and similar setups are extending the community. It's all the same people, but the Parallax-hosted servers don't have the features we (professional software developers) require.

    As for "which board of reviewers", well, that's only until we settle on a single system. At this point it looks a bit like we are in a "may the best system win" free-for-all.
  • prof_braino Posts: 4,313
    edited 2012-12-29 07:32
    You don't need to download the files to review them - you can do so via the web interface.
    • Click on each file to view the diff (in this case, diffing against a non-existent previous version) and double-click a line to add a comment to it. You only need Git to submit new files or changes.

    Gerrit is explicitly as lightweight as possible in the registration department. Username and password handling is farmed out to third parties via OpenID, so no password ever touches my server, and the token Gerrit uses to authenticate you can be revoked at any time by your OpenID provider (at least Google makes it easy). Chances are you already have an OpenID provider you can use, so you shouldn't need yet another password to remember.

    ...difficult to read, Gerrit workflow diagram

    ... to download code without needing to set up Git,
    • click the "(gitweb)" link next to the patch set,
    • then click on "snapshot" next to the "tree" line at the top.

    I would suggest that these instructions start getting collected and marked INSTRUCTIONS. A link to these should be put in the first place a person sees when they get started.
  • prof_braino Posts: 4,313
    edited 2012-12-29 07:46
    lonesock wrote: »
    Random thought: Maybe have 3 stages of certification. When an object is initially accepted (maybe after N nominations?) it is 'Bronze'. Passing basic testing could advance it to 'Silver', and it attains the 'Gold' certification either through V votes, or M months without a bug report or something. I have no idea what actual mechanisms would work best. However, this might be nice so that new users can at least use a broad selection of decent objects right away, without having to wait for whatever certification process is in place. Also, users can gauge how difficult it might be to use a given object in a production setting, knowing they may have to put effort into a Bronze object, or they could wait for a Gold object, etc.

    Jonathan

    This is a good thought. I would suggest something simpler, if possible (and maybe it's not possible, but we should try). Another alternative:

    Stage 1 would be "available". This would mean somebody thinks a piece of code is interesting enough to start working on. The expectation is that the code will need at least some additional work or changes after review.

    Stage 2 is "in process". This means somebody besides the author has looked at the item, and submitted the first set of comments. The expectation is that an item will spend a LOT of time in this stage since this is where all the community interaction takes place. This does not need to happen fast, it only needs to happen well.

    Stage 3 is "Approved". This means that N reviewers have looked at the code and submitted comments. The comments have been discussed and marked either "do this" or "no work needed". All reviewers have accepted the comments as resolved.

    Notice that this does not specify who changes the code. Also, each change is a new piece of code, so there could be a zillion "edit" versions of the code, and the final version that includes all the changes is the one that gets approved.

    Also, the comments would contain questions like "What does this do?" and "How do we show that it does this?". This would address testing and requirements.

    At stage three, somebody can decide if it's gold.
  • prof_braino Posts: 4,313
    edited 2012-12-29 07:49
    SRLM wrote: »
    Bronze: N votes for inclusion
    Silver: Meets formatting and basic documentation guidelines (follows defined convention, has example programs, etc.) <- Matches most "good" objects nowadays
    Gold: Includes unit testing and documented requirements

    This is very good. I would suggest that these ratings be applied AFTER considering the steps in post #46.
  • prof_braino Posts: 4,313
    edited 2012-12-29 08:01
    eiplanner wrote: »
    I must have missed a few threads way back somewhere...Why doesn't Parallax just do this with the OBEX? ... just another OBEX but with higher standards. ... it wouldn't take long for the process to become overwhelming for several people that are already very committed to this forum. ...why isn't Parallax implementing it?

    Just a novice speaking my 2 and a third cents.

    Good comment.

    As SRLM said, Parallax started this last summer; we only got so far before being overcome by events. We let our brains simmer for a while and are trying a new tack. The summer 2012 effort was an email discussion between Parallax and selected interested members. We looked at the wide variety of options and the potential effort involved in each.

    This IS part of what Parallax is doing, in that the team invited by Parallax is involved. We are running this and other experiments. This particular experiment examines how far we can get with the effort run by the community. There is a question of whether even this community can play together that well. I say it will work if we take it slow.

    I'm a process guy. I know how to do this. It's easy once we get it started. But there are a couple of little bits to consider before we try to jump off a cliff and expect to fly. We'll get there.
  • Circuitsoft Posts: 1,166
    edited 2012-12-29 10:43
    So, maybe I'll go make three branches - Bronze, Silver, and Gold. We'll still have the same process for submission to each branch, but we'll relax the requirements on the lower levels, and re-do the submission process to upgrade a driver/module from one level to the next.
  • Circuitsoft Posts: 1,166
    edited 2013-01-02 19:09
    One of the reasons I set up Gerrit is that it is very good for, essentially, code mentoring. Is that something people, in general, are interested in?
  • SRLM Posts: 5,045
    edited 2013-01-02 19:33
    One of the reasons I set up Gerrit is that it is very good for, essentially, code mentoring. Is that something people, in general, are interested in?

    What do you mean by "code mentoring", and what are some example cases?
  • pedward Posts: 1,642
    edited 2013-01-02 23:54
    I'm a little miffed at Doug's marginalization of what we did as a group; he seems to have condensed a lot of work into a few words.

    Here is the whole document we collaborated on. It spells out 90% of the "needs" for code; it is only awaiting a fairly simple element: what document tags should be used (like Javadoc) and a parser to pull those tags.

    https://docs.google.com/document/d/1zayL7eLZ4CZq4imdvZGcO9ifd7Q_FciJBqgm0b7dmE0/edit
  • prof_braino Posts: 4,313
    edited 2013-01-03 08:23
    SRLM wrote: »
    What do you mean by "code mentoring", and what are some example cases?

    We as the community can define how this applies to "us".

    "Code Mentoring" can mean teaching ourselves what we tend to want to see in our code, and establishing something specific as our "standard". This would be based on anything we may have seen before (as examples of what to do or not do), or anything else we choose.
  • prof_braino Posts: 4,313
    edited 2013-01-03 08:36
    pedward wrote: »
    I'm a little miffed at Doug's marginalization of what we did as a group; he seems to have condensed a lot of work into a few words.

    Here is the whole document we collaborated on. It spells out 90% of the "needs" for code; it is only awaiting a fairly simple element: what document tags should be used (like Javadoc) and a parser to pull those tags.

    https://docs.google.com/document/d/1zayL7eLZ4CZq4imdvZGcO9ifd7Q_FciJBqgm0b7dmE0/edit

    This is not to marginalize that work in any way. It was very good and very thorough. It is merely to point out that the bulk of the work remains.

    I don't know if I'm saying this correctly, but I'll try again. Code does not exist in its own right. It must also have a "raison d'être", some sort of requirements. It must also have a way of showing that these requirements have been fulfilled. One such requirement is a description of the hardware needed (beyond the code's implementation language itself, code usually requires some specific hardware which is NOT part of the off-the-shelf Prop board); we need a description of the required hardware circuits, parts, etc.

    This is the "engineering" that must be in place before the code has any relevance. This is what I mean by the engineering being 90% of the work, and the code only 10%.

    If we have an object to run a brain scanner, we can't tell if it's any good unless we can build a brain scanner and put it through its paces.

    A "Gold" object would describe the target in sufficient detail that we could set up an appropriate circuit and perform some tests to prove that the object does or does not do what is intended.
  • jazzed Posts: 11,803
    edited 2013-01-03 09:32
    This is not to marginalize that work in any way. It was very good and very thorough. It is merely to point out that the bulk of the work remains.

    Do you have a requirements list for that bulk? Does everyone agree to it?
    I suggest that if you think it's important, that you write it and take this thing out of limbo.

    I thought the document was pretty good except for that bit about MIT or Creative Commons. MIT is the only option.
  • SRLM Posts: 5,045
    edited 2013-01-03 10:43
    The GoldStandardChecklist document mentions the Spin2HTML tool, but it's a bit premature. We wanted the "SpinDoc" format to be as similar to JavaDoc as possible, which is what Parallax Education is using for their PropGCC code (via the Doxygen tool). However, the Spin2HTML tool doesn't support the tags, and the effort to add Spin to Doxygen looks pretty intense. Still, we wanted to specify the documentation format so that the tool could be made later, and used to convert already-made objects.
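
    For reference, the JavaDoc/Doxygen style we were aiming at looks roughly like this on a C declaration (illustrative only; the names are made up and the SpinDoc tag set was never finalized):

        /** Opaque driver instance; allocated and owned by the caller. */
        typedef struct serial serial_t;

        /**
         * @brief   Read one byte from the receive buffer.
         *
         * Blocks until a byte is available.
         *
         * @param   self  Driver instance previously passed to the start routine.
         * @return  The received byte, 0-255.
         */
        int serial_rx(serial_t *self);

    The idea was that a later tool could pull the same kinds of tags out of Spin comments and generate the HTML documentation.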
  • jazzed Posts: 11,803
    edited 2013-01-03 14:08
    SRLM wrote: »
    The GoldStandardChecklist document mentions the Spin2HTML tool, but it's a bit premature. We wanted the "SpinDoc" format to be as similar to JavaDoc as possible, which is what Parallax Education is using for their PropGCC code (via the Doxygen tool). However, the Spin2HTML tool doesn't support the tags, and the effort to add Spin to Doxygen looks pretty intense. Still, we wanted to specify the documentation format so that the tool could be made later, and used to convert already-made objects.

    I've used Doxygen with SPIN before. It requires a function signature within a comment to work, so it's a bit of a hack.
    I.E. ' int main(int argc, char *argv[]); ... defining types for the parameters in this way might actually help though :)

    Shirley (wink) someone (not me) could convince Phil to just do the "Right Thing(TM)" with his tool :)
  • prof_braino Posts: 4,313
    edited 2013-01-03 18:22
    jazzed wrote: »
    Do you have a requirements list for that bulk? Does everyone agree to it?
    I suggest that if you think it's important, that you write it and take this thing out of limbo.

    I thought the document was pretty good except for that bit about MIT or Creative Commons. MIT is the only option.

    I'm still not being clear, I guess.

    The majority of the work that needs to be done for any item to acquire a "gold" classification or any type of claim of fitness for any particular use is:
    • We have a statement of what the code is supposed to do
    • We have a statement of how we can check whether or not the code does this
    • We have some sort of confidence that others agree they can understand the requirements, and feel the tests actually confirm or deny that the item does as intended.

    That's it. That's all that is needed. The code itself is trivial compared to this, and moot if this is not present. This is fairly clearly stated, pretty much straight out of the literature, and is not in limbo. Ask me any questions about this; I can maybe answer them, or point to where the answers may be found.
  • Seeker Posts: 58
    edited 2013-01-03 19:43
    As a "quiet" member and user... I was hoping this might go someplace. Not in the usual debate over X/Y/Z of what one faction thinks, but in the SPIN/PASM areas, to make finding an object to use/misuse easier for everyone, not in each member's personal favorite cause. New and not-so-new users need a new OBEX, not a new way of defining the universe of the Prop. I guess I was wrong.

    I mean no disrespect to anyone... but this has become WAY too complicated to use already... not focused on SPIN/PASM (other methods focus on C. Why do we need a C obex? C is not even a complete project yet) and still bogged down by the same debates.

    Feed the real users/future users of Prop and Prop2. For now, it is Spin and PASM. Or is FORTH the way of the future? I thought BUFFALO for the 68HC11 was IT! But it is not for the Prop. Though it would make a great emulation! Why are we deviating so far from the intended language of the Prop? Shouldn't the focus be on the native tongue?

    So I am back to working in silence... all hopes for progress on this endeavor pretty much lost as before. I read looking for better, but I see where this is already going. I had hopes, but I see this debate is/has lost any meaning to me... a day-to-day user with mostly personal projects, plus one or two professional projects a year.

    Alienate your audience with your endless debates that go nowhere... even us low volume ones... and you may have won the battle. But you lose the war.

    Off my soapbox and back to the keyboard. Too much like politics in modern times.

    Too much talk about what some in power want with no regard what the users might need or want.
  • pedward Posts: 1,642
    edited 2013-01-03 20:14
    Doug,

    You're trying far too hard to quantify the creative process. If someone wants to write an object to do X, they will write it to fit some need they have at the moment, not some general need that exists on paper. Developers are fundamentally trying to solve their immediate problems; then they share their solution with others. It's at the sharing point that we get to see and "grade" their submission according to a standard.

    I purposefully stated that I wanted to keep the GS very light because 9/10ths of software development is dictated by the personal drive and creative process of each developer. You don't tell a developer they have to quantify their vision before they begin work; that's for folks working government jobs!

    Good developers will weigh in on bad code when they see it; the checklist we came up with is just a minimal amount of process to ensure someone has actually read the code and didn't just rubber-stamp it for inclusion.
  • ctwardell Posts: 1,716
    edited 2013-01-03 20:48
    I like the level system that has been mentioned; we will never come to agreement on GOLD from the get-go.

    I think all levels should require the specification and test docs. They can be refined as the object proceeds, but they are best practices and as such I feel they should be used at every level.

    Initial levels would require that the object meet the specification and pass the tests.

    Higher levels would look more at implementation details and coding best practices.

    C.W.