As a "quiet" member and user... I was hoping this might go someplace. Not in the usual debate over X/Y/Z of what one faction thinks, but in the SPIN/PASM areas to make finding an object to use/misuse easier for everyone, not in each members personal favorite cause. New and not so new users need a new OBEX, not a new way of defining the universe of the Prop. I guess I was wrong.
I mean no disrespect to anyone... but this has become WAY too complicated to use already... not focused on SPIN/PASM (other methods focus on C. Why do we need a C obex? C is not even a complete project yet) and still bogged down by the same debates.
Feed the real users/future users of Prop and Prop2. For now, it is Spin and PASM. Or is FORTH the way of the future? I thought BUFFALO for the 68HC11 was IT! But it is not for the Prop. Though it would make a great emulation! Why are we deviating so far from the intended language of the Prop? Shouldn't the focus be on the native tongue?
So I am back to working in silence... all hopes for progress on this endeavor pretty much lost, as before. I read looking for better, but I see where this is already going. I had hopes, but this debate has lost any meaning to me... a day-to-day user with mostly personal, but one or two a year, professional projects.
Alienate your audience with your endless debates that go nowhere... even us low volume ones... and you may have won the battle. But you lose the war.
Off my soapbox and back to the keyboard. Too much like politics in modern times.
Too much talk about what some in power want, with no regard for what the users might need or want.
Hmmm... I'm not sure that I see any suggestions in this, just criticisms. Do you have any recommendations for what needs to be done?
The majority of the work that needs to be done for any item to acquire a "gold" classification or any type of claim of fitness for any particular use is:
We have a statement of what the code is supposed to do
We have a statement of how we can check whether or not the code does this
We have some sort of confidence that others agree they can understand the requirements, and feel the tests actually confirm or deny that the item does as intended.
That's it. That's all that is needed. The code itself is trivial compared to this, and moot if this is not present. This is fairly clearly stated, and pretty much straight out of the literature, and is not in limbo. Ask me any questions about this, I can maybe answer them, or point to where the answers may be found.
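To make that concrete, here is a purely hypothetical example of the three statements for a simple full-duplex serial object (the wording and numbers are invented for illustration, not taken from any existing OBEX entry):
* Statement of what it does - "This object transmits and receives asynchronous serial data, 8N1, at a caller-specified baud rate on caller-specified pins."
* Statement of how we check - "Loop TX back to RX, send 1,000 known byte values at the configured baud rate, and compare what was received to what was sent. Pass = every byte matches; anything else = fail."
* Confidence that others agree - at least one reviewer besides the author reads the two statements above and confirms that the test, as written, actually answers whether the object does what the first statement claims.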
prof,
1) Can you do a sample of the statement and test docs for some simple object in the obex, or even some really simple object, say, out of the PEK manual?
2) Once we have that you can pick another object and as a group we can work through creating those docs for the new object.
... but this has become WAY too complicated to use already...
Three points should NOT be too complicated. This is proven extremely effective, but it can be done right or done wrong. The biggest issue is not thinking outside the box, the issue is getting everyone to see the same box.
You're trying far too hard to quantify the creative process.
This is the basic misunderstanding, and the difference between consistent success through process, and occasional success through luck. This stuff is easy, it is known, and the only thing blocking the effort is (and always is) assuming we are as perfect as we can be (if you make this assumption, it becomes true).
This is NOT quantifying the creative process, which is unknown (or ambiguous). This is quantifying the ENGINEERING process, which is well known.
If someone wants to write an object to do X, they will write it to fit some need they have at the moment, not some general need that exists on paper.
If an engineer wants to do X, but X is not defined, X will never happen. You may end up with "something", and it may even be accepted, but it's only a matter of luck. This is what we have today, in the OBEX, and in most workplaces.
The best engineers' process (this is what we are trying to communicate) involves defining exactly what is required, up front as planning, before any other work is done. Always. The most experienced engineers don't need to write it down, and it appears they just "magically" get it right. This is not the case. There is always planning and analysis. Doing this first is the most efficient way, doing this last is the least efficient way.
Developers are fundamentally trying to solve their immediate problems, then they share their solution with others. It's at the sharing point that we get to see and "grade" their submission according to a standard.
Again, if we do not have a clear statement of what the item is intended to provide, we have no way of deciding if it does or not.
* Requirement - it needs to do this: ...
* Test - Does it do that? ...
THIS is what we check for holes. Forget about the code until we know what it's supposed to do. Otherwise we have the OBEX all over again.
Good developers will weigh in on bad code when they see it; the checklist we came up with is just a minimal amount of process to ensure someone has actually read the code and didn't just rubber-stamp it for inclusion.
The checklist in the gold standard only covers form, not functionality. Code could easily meet all points of the checklist and still not function. Or function in a single case, and not function in any other.
What do "Good" and "Bad" mean, to the extent that everyone reaches the same evaluation on the same item? If "good" means "testable, such that the test is answerable yes/no or true/false", then we would all get the same answer for every item. Then we have an objective evaluation, rather than a subjective evaluation and a lot of arguments and discussions.
The only trick is stating requirements such that they can be tested, and the tests answered by a yes/no evaluation. This is very simple once you get used to it. It makes it very difficult to make mistakes. This is not language specific, it applies to any software development.
1) Can you do a sample of the statement and test docs for some simple object in the obex, or even some really simple object, say, out of the PEK manual?
2) Once we have that you can pick another object and as a group we can work through creating those docs for the new object.
At last! The invitation! Now we can get to work. This is a game for the whole group to play, and it is fun. (If it's not fun, you are doing it wrong; please watch until you catch on.) Here's how this works:
SQE (me, in this case) monitors the process. The engineers (you) are the technical experts. I ask a lot of questions (some may seem dumb, and some will be) and we answer them. That's really all we need for right now. I will step us through a standard process, and we will tailor the process to our needs. We iterate through the process until we decide we are done. The process itself is simple; tailoring takes some skill and a little time.
Question 1: What is the function?
C.W. (or anyone else), please nominate any function that you are interested in. Could be something already in the OBEX, could be anything at random. But it helps if you are interested, as one has little motivation if they are not planning to use the function. Smaller is better, popular is better.
Braino, can I nominate the serial driver I'm already working on? It's not functional at the moment, but the discussion of exactly what it should do will be an excellent exercise.
Braino, can I nominate the serial driver I'm already working on? It's not functional at the moment, but the discussion of exactly what it should do will be an excellent exercise.
This is fine. In fact, the process should start at this point. The earlier in the cycle the better. Can you post a link to your code, and to the reference(s) you are following? There should be at least two so far.
This is fine. In fact, the process should start at this point. The earlier in the cycle the better. Can you post a link to your code, and to the reference(s) you are following? There should be at least two so far.
It's in Gerrit, and I think you've already commented on it, if not on my attempt at a README doc. I'm not sure what you mean by reference(s), but it's an attempt to port Lonesock's ffds1 driver.
I'd like to nominate the new W5200/W5100 driver and libraries. The source code can be found on Google Code. I don't want to speak for Parallax and Wiznet, but I'd venture a guess they would be very appreciative of continued community involvement.
The following libraries are thoroughly community tested. Special thanks to twc and Igor_Rast for the many hours of dedicated testing and detailed feedback. Details can be found in the Spinneret forum.
Dhcp.spin
Dns.spin
Socket.spin
SpiCounterPasm.spin
W5200.spin
This is a layered application where a socket or socket type (DHCP, DNS) exposes a virtual socket to the programmer.
Socket.spin: Wraps W5200.spin method calls to produce a virtual socket.
By the way, if you want to download code without needing to set up Git, click the "(gitweb)" link next to the patch set, then click on "snapshot" next to the "tree" line at the top.
We need to work on the instructions. I can't tell if I'm doing this right yet. A new person has to be able to get going, I am going to be the worst case example of a new person.
I don't want GIT installed, I work from several different PCs and tablets, etc. Can I simply view code and submit comments?
I don't want to clone the repository, as I am a reviewer, not a contributor. I only need to see that the repository exists, and what folks are doing in it. Is this possible?
Clicking on "Gitweb" gives me
HTTP ERROR: 404
Problem accessing /parallax/gitweb. Reason:
Not Found
The file is named DEVEL.txt; are these the instructions? I would suggest naming it Instructions.txt, Setup_Instructions.txt, or ReadMe.txt.
Does BASE mean current version, or original version?
If I need to write an object to talk to a chip, say a DS1302, I don't sit down and say "ok meow, I'm gonna write down all possible routines I'm going to need, then I'm going to define the exact calling API, then I'm going to show pseudocode". That's the stuff you do when you're either tackling a huge problem, or you are in school and it's a function of showing your work to the teacher.
The real process goes like this:
Read datasheet
determine pinout and clocking protocol
write first whack at send/receive function implementing protocol
attempt to call simplest chip function with new code
verify result is what datasheet indicates
rinse...repeat and debug comms
write high level function for each chip function
write demo/test program to use the functions
I've enumerated the thought process for writing a DS1302 driver, something I did as part of the first accepted gold standard object (the only community contributed GS object) for the SPIN_SPI object.
I learned there was a bug in the DS1302 in that when it's deselected it doesn't float the data line. I also learned a few other bits not documented in the datasheet correctly.
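For anyone who wants to see what the "first whack at send/receive function" step can look like, below is a minimal, untested sketch in C/C++ for PropGCC. The pin numbers are hypothetical, chip-enable handling and the command byte are left out, and the LSB-first, rising-edge-in / falling-edge-out behavior is taken from the DS1302 datasheet but should be verified on real hardware, which is exactly the "verify result is what datasheet indicates" step above.

#include <propeller.h>   /* PropGCC: DIRA, OUTA, INA pin registers */
#include <stdint.h>

/* Hypothetical pin assignments; change to match your wiring. */
enum { DS1302_CLK = 1, DS1302_IO = 2 };

/* Shift one byte out, LSB first. The DS1302 samples the I/O line on the
   rising edge of SCLK (per the datasheet; check with a scope). */
static void ds1302_send_byte(uint8_t value)
{
    int i;
    DIRA |= 1 << DS1302_IO;                    /* drive the I/O line      */
    for (i = 0; i < 8; ++i) {
        if (value & 1) OUTA |=  (1 << DS1302_IO);
        else           OUTA &= ~(1 << DS1302_IO);
        OUTA |=  (1 << DS1302_CLK);            /* rising edge: bit in     */
        OUTA &= ~(1 << DS1302_CLK);
        value >>= 1;
    }
}

/* Shift one byte in, LSB first. The DS1302 drives its next output bit on
   each falling edge of SCLK, so the first bit is already on the line when
   the command byte finishes. */
static uint8_t ds1302_get_byte(void)
{
    uint8_t value = 0;
    int i;
    DIRA &= ~(1 << DS1302_IO);                 /* float I/O, chip drives it */
    for (i = 0; i < 8; ++i) {
        value >>= 1;
        if (INA & (1 << DS1302_IO)) value |= 0x80;
        OUTA |=  (1 << DS1302_CLK);            /* clock out the next bit  */
        OUTA &= ~(1 << DS1302_CLK);
    }
    return value;
}

The demo/test program from the last step would then call this through the chip-level functions and print what it reads back, so the harness doubles as a usage example.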
I'd like to nominate the new W5200/W5100 driver and libraries.
Dhcp.spin
Dns.spin
Socket.spin
SpiCounterPasm.spin
W5200.spin
The code base is rather large, built by a single developer, and requires hardware. Therefore, it might not be the best candidate to kick the tires.
This is also a good candidate. As you note, it is a large item. Each individual Spin item should be reviewed on its own before reviewing the whole. Which one of these items would you consider the starting point?
General process note: Typically, several items will be in process at any given time. Each item will have its own set of reviewers, as different sets of folks will be interested and available for any given item. The reviewers can participate in any items they wish and have time for.
HTTP ERROR: 404
Problem accessing /parallax/gitweb. Reason:
Not Found
I ran into that once, logging in to Gerrit will fix that error. I'm in the process of making it accessible without logging in first, but I have other priorities at the moment, and there are a fair number of steps there.
If I need to write an object to talk to a chip, say a DS1302, I don't sit down and say "ok meow, I'm gonna write down all possible routines I'm going to need, then I'm going to define the exact calling API, then I'm going to show pseudocode". That's the stuff you do when you're either tackling a huge problem, or you are in school and it's a function of showing your work to the teacher.
The real process goes like this:
Read datasheet
determine pinout and clocking protocol
write first whack at send/receive function implementing protocol
attempt to call simplest chip function with new code
verify result is what datasheet indicates
rinse...repeat and debug comms
write high level function for each chip function
write demo/test program to use the functions
I've enumerated the thought process for writing a DS1302 driver, something I did as part of the first accepted gold standard object (the only community contributed GS object) for the SPIN_SPI object.
I learned there was a bug in the DS1302 in that when it's deselected it doesn't float the data line. I also learned a few other bits not documented in the datasheet correctly.
The discovery process you laid out is very typical and valid for a task like you mentioned, but before it is presented as an object it needs the docs; otherwise, how can it be tested as fit for purpose?
I see it as you are discussing the research and development phase, whereas the obex is for product refinement and deployment.
If I need to write an object to talk to a chip, say a DS1302, I don't ....
Very good. We want to know what you don't do, so we too can skip it if it's unneeded, or do it ourselves if it is needed. We want to know what we ought to do to get the best results.
The real process goes like this:
Read datasheet
Yes, this is the first step.
determine < a bunch of stuff >
do investigation
run some experiments
verify result is what datasheet indicates
rinse...repeat and debug
So, the requirements are "do these functions described on the datasheet"
And the tests are "verify the functions from the datasheet work".
So far we are in total agreement. This describes a good general process.
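As a hypothetical example of that pairing, sticking with the DS1302 (the wording is mine, purely for illustration):
* Requirement - "ReadSeconds returns the seconds register described in the datasheet as an integer 0..59."
* Test - "Set the time to a known value, wait two seconds, call ReadSeconds; pass if the result is the starting value plus two (modulo 60). Yes or no."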
I've enumerated the thought process for writing a DS1302 driver, something I did as part of the first accepted gold standard object (the only community contributed GS object) for the SPIN_SPI object.
The difference is that here, some of the folks don't know what you did (in your head), and should not be expected to take it on faith that you did these things. So we want to get from a thought process to a group process. We want some indication that the required work (that you described as informal steps) actually happened, to give us confidence that the item will work as required.
I learned there was a bug in the DS1302 in that when it's deselected it doesn't float the data line. I also learned a few other bits not documented in the datasheet correctly.
And you gain some knowledge. BUT the rest of us (me at least) did not get that knowledge. The idea is that these items would be captured in the SPIN_SPI object for the NEXT guy to be aware of, when the information is pertinent. The next person would be able to understand what you found, and how you arrived at your decisions, and hopefully be able to produce work of the same quality, by following a similar process.
None of this asks you to do anything you don't already do. It just anticipates that someone is going to ask "Did you read the datasheet?" and "Did you check that it works?", etc., and arranges the material in a way we can easily see whether it's there or not.
pedward, dead nuts... that's how I do it. I'm sure we're not the only two...
It seems to me that this "Gold Standard" project is suffering from scope creep. It's one thing to write quality, reusable, and documented black box source. It's quite another to educate folks on the finer points of an SPI bus or the DHCP protocol.
Ideally, a test harness will be much smaller and easier to understand than the black-box code it's supposed to support. The test harness should also present itself to the user as an example of how to use the function/object in question.
Ideally, a test harness will be much smaller and easier to understand than the black-box code it's supposed to support. The test harness should also present itself to the user as an example of how to use the function/object in question.
Agreed... However, there is an assumed level of knowledge. While an SPI bus is straightforward, the possible protocols can be and are very different.
So, the test harness for a SPI interface could be a program to read/write from a 93C46. That's cheap hardware, a simple command set, and a known, testable interface. Or, simpler yet, an HC589 (which I think is SPI compatible) will delay any output data by 8 clocks.
So, the test harness for a SPI interface could be a program to read/write from a 93C46. That's cheap hardware, a simple command set, and a known, testable interface. Or, simpler yet, an HC589 (which I think is SPI compatible) will delay any output data by 8 clocks.
And that might be fine... SPI is simply the mechanism I used to convey my thoughts.
Still...feels like scope creep. There's some nebulous barrier stopping Gold Standard work. I'm not sure why. Heck, the documentation I read in a previous post seemed rather clear.
What did twc say, ...pick something and kick the tires...?
What did twc say, ...pick something and kick the tires...?
That's what I set out to do here, but I can't form a community process without a community. So, I'm trying to implement some mechanism, taking process advice from prof_braino, and provide a code sample or two to run through the process to see how it works, and refine the process as we do.
If a developer contributes an object to support chip X, it should be assumed the developer has actually done that; otherwise, why would they go to the effort?
If I simply state: "here is an object to support the DS1302 chip", why would you read anything more into it? Why do I need to provide some nauseating level of details as to how I arrived at that?
The checklist clearly states that you should provide example code to act as a demo and test harness, to demonstrate the nuances of interface (order of operations, etc).
The reviewer is responsible for ensuring that the object looks reasonably complete and implements an interface to the DS1302. Furthermore, in these situations the reviewer would ensure the submitter is using an approved SPI object for interfacing to the chip and isn't bound to using only one, unless timing critical issues are present.
What I'm getting at is that much of the approval could be nauseatingly documented, but it would leave holes and probably be too rigid to accept good, yet different style, code.
It is the responsibility of a seasoned developer to review and approve any code entering a big project, that's why there are project stewards.
I see so much "but it must have X for the process", yet I don't see you identifying and writing down ideas. You appear to hit on a few things, rather than contribute to the creative process, which in this context is unhelpful.
It would be equivalent to a senior developer saying, "nope, I don't like that, but I like this", and not offering any feedback on how to make something better or why they like or dislike something. You cannot simply say "it needs process" and not contribute process where you see it needed.
I've been spending the past week working on getting unit testing for C/C++ (both!) working with PropGCC, and experimenting with TDD in general (I've never tried it before). It's come out very nicely, and if there is interest I can post a how-to once I get the process cleaned up.
As far as unit testing on the Propeller, I've come to the conclusion that I'll need to make a hardwired board with a bunch of peripherals to allow for automated testing. So far, I've tested (in C++) FSRW, FFDS1, and I'm working on a custom I2C driver. The SD card required 4 pins, the FFDS1 required a resistor between two pins, and the I2C requires an EEPROM, an L3GD20 gyro, and a MS5611 barometer.
One of the challenges that I have run into is how to test things like the I2C bus or the SD card. This is due to the fact that the external devices can be put into a bad state at the end of one test, and fail subsequent tests. I don't think there is any way around this except to not have tests fail, or possibly to power cycle the entire system on every test. The power cycling seems like a difficult option, so I've opted to do the following:
Using I2C as an example, I've put the four low level routines into their own class:
Start(), Stop(), SendByte(), GetByte()
With these four routines, any I2C device can be communicated with. These routines have to be tested by hand, with a logic analyzer or oscilloscope to verify correct behavior. I've also tested them with external chips, just to make sure that reality matches the theory.
From there, I have a wrapper class around those four routines that deals with the specific sequences to read/write bytes to a device. So far, I have found three different "protocols" for how a byte needs to be written or read (to say nothing of a multi-byte transfer). This wrapper class can now be tested without any hardware at all, instead using a mock object that provides the four I2C functions. I haven't gotten that far yet, though: currently I'm testing the wrapper object with the actual hardware.
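In case a concrete picture helps, here is a rough host-side sketch of that arrangement. The class names, the call signatures, and the 0x68 device address are all mine, invented for illustration; they are not the actual driver's API.

#include <cstdint>
#include <cstdio>
#include <vector>

// The four low-level routines, behind an interface so the wrapper does not
// care whether the real bus or a mock is underneath. Names are illustrative.
struct I2CBus {
    virtual ~I2CBus() {}
    virtual void Start() = 0;
    virtual void Stop() = 0;
    virtual bool SendByte(uint8_t b) = 0;      // true if the slave ACKed
    virtual uint8_t GetByte(bool ack) = 0;     // ack = ACK after receiving
};

// Wrapper implementing one common register-read "protocol":
// START, addr+W, register, repeated START, addr+R, read one byte (NAK), STOP.
class I2CDevice {
public:
    I2CDevice(I2CBus &bus, uint8_t addr) : bus_(bus), addr_(addr) {}
    uint8_t ReadRegister(uint8_t reg) {
        bus_.Start();
        bus_.SendByte(addr_ << 1);             // 7-bit address + write bit
        bus_.SendByte(reg);
        bus_.Start();                          // repeated start
        bus_.SendByte((addr_ << 1) | 1);       // address + read bit
        uint8_t value = bus_.GetByte(false);   // NAK the final byte
        bus_.Stop();
        return value;
    }
private:
    I2CBus &bus_;
    uint8_t addr_;
};

// Mock bus: records the call sequence and returns a canned byte, so the
// wrapper's sequencing can be checked with no hardware attached.
struct MockBus : I2CBus {
    std::vector<uint8_t> sent;
    int starts, stops;
    uint8_t nextRead;
    MockBus() : starts(0), stops(0), nextRead(0) {}
    void Start()                 { ++starts; }
    void Stop()                  { ++stops; }
    bool SendByte(uint8_t b)     { sent.push_back(b); return true; }
    uint8_t GetByte(bool)        { return nextRead; }
};

int main() {
    MockBus mock;
    mock.nextRead = 0x42;
    I2CDevice dev(mock, 0x68);                 // 0x68 is just an example address

    bool pass = dev.ReadRegister(0x0F) == 0x42
             && mock.starts == 2 && mock.stops == 1
             && mock.sent.size() == 3
             && mock.sent[0] == 0xD0           // 0x68 << 1, write
             && mock.sent[1] == 0x0F           // register number
             && mock.sent[2] == 0xD1;          // (0x68 << 1) | 1, read
    printf("I2C wrapper read sequence: %s\n", pass ? "PASS" : "FAIL");
    return pass ? 0 : 1;
}

The same wrapper can then be handed the real Start/Stop/SendByte/GetByte implementation when it is time to run against hardware.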
In any case, I suspect that there may be similar situations for other protocols (SPI, ...).
If a developer contributes an object to support chip X, it should be assumed the developer has actually done that; otherwise, why would they go to the effort?
If I simply state: "here is an object to support the DS1302 chip", why would you read anything more into it? Why do I need to provide some nauseating level of details as to how I arrived at that?
@pedward: OK, if you don't want to do it that way, don't. But please step out of the way while the ones that want to give this a try do so.
To answer your question, why? Because we don't have any reason to believe there is any support for the DS1302 until we try it. If we try the code and have a problem (which always happens), then we have to trace all your steps and find where it went wrong. If you didn't leave us a trail of breadcrumbs, we will never find our way out of the woods short of redeveloping from scratch. Which is what we are trying to prevent with "Gold" standard code.
I see it as you are discussing the research and development phase, whereas the obex is for product refinement and deployment.
As C.W. says. The complaint against the OBEX is that it is just code, we can't tell if it's any good or not.
The software development process includes all the stuff BEFORE the code is written, this effort is to try to capture that "extra" that the OBEX lacks. Exactly what the extra is remains undefined. I state that the lacking facets are a clear statement of what the item is intended to do, and how to prove whether or not it does.
and experimenting with TDD in general (I've never tried it before)
I live by TDD and have been using the technique for a little over two years now. Maybe more... time flies these days.
The software development process includes all the stuff BEFORE the code is written, this effort is to try to capture that "extra" that the OBEX lacks. Exactly what the extra is remains undefined.
That's good on paper and not practical. Unless 1) we're launching a rover to Mars, or 2) Parallax asks for a library/driver with very specific INs and OUTs. Most of us are writing this stuff in our spare time, with spousal permission of course. When the opportunity shows itself and I have the time to sit and code, I'll do as pedward outlined (which is TDD). I don't see myself writing specs for myself. That's goofy...
I think the process for process' sake approach will not lead to a useful conclusion. I favor productivity over superfluous process. I can spend 15 minutes looking at code to yield a high level assessment on the quality and the potential outcome from a selection of code.
Saying that a face value statement "Here is support for chip X" is invalid without process documentation is a ridiculous assertion.
Debugging of code is NEVER approached from "read some document the author wrote" because if the author wrote buggy code, the document is probably buggy too.
When trying to work with someone else's code, you instrument and test incrementally to your own satisfaction, that's the fast path.
I think the process for process' sake approach will not lead to a useful conclusion. I favor productivity over superfluous process. I can spend 15 minutes looking at code to yield a high level assessment on the quality and the potential outcome from a selection of code.
Saying that a face value statement "Here is support for chip X" is invalid without process documentation is a ridiculous assertion.
Debugging of code is NEVER approached from "read some document the author wrote" because if the author wrote buggy code, the document is probably buggy too.
When trying to work with someone else's code, you instrument and test incrementally to your own satisfaction, that's the fast path.
It seems to me, then, that the current obex is all we need. Done, that was easy.
I'm still all for doing a trial run using prof_braino's approach, maybe using full duplex serial as it is something we can test with no need for any SPI chips or anything like that.
This is Circuitsoft's thread, and he has gone to the effort of trying to make something happen, so I think it's up to him to decide how he wants to proceed.
I don't think we should be including non-SPIN/PASM code in this discussion at all. The requirements are different.
This might mean you are doing something different from what I'm doing. The general process is defined as identical for any language or environment, that is:
Say what you are doing (state requirements)
Say how you determine whether it does that (state tests)
Show that is does that or not (show results)
The only difference for SPIN or C would be "code it in SPIN" or "code it in C". The "engineering" part of the process would be the same for any tool choice.
The file is named DEVEL.txt; are these the instructions? I would suggest naming it Instructions.txt, Setup_Instructions.txt, or ReadMe.txt.
Does BASE mean current version, or original version?
I named it that because those are instructions only for the use of Gerrit, and not necessary just to download or use the repo.
BASE is the original version of the file in question. If it's a new file, then the original version is "the empty file."