
Good coding habits for HLL (was 'The Merits Of Assembly Lang.').


Comments

  • davidsaunders Posts: 1,559
    edited 2011-05-28 13:05
    Potatohead:

    Yes, for things that will always remain small, it works well to minimize the commenting. For large projects involving multiple developers, though, the level of documentation I mention really is not quite enough.
  • potatohead Posts: 10,261
    edited 2011-05-28 13:06
    Would you consider Linux large?
  • davidsaunders Posts: 1,559
    edited 2011-05-28 13:10
    If you note what I said, this method lends itself to writing code quickly, without much extra thought, because the comments can be quickly applied to the header after the routine is written. Then, before it is submitted to the code base, the external documentation can almost be copied directly out of the comment headers for the routines, and you get a fairly low rate of bugs and high efficiency. Now, I do know people who go too far to the extreme in commenting and documenting, and that will slow you down.
  • davidsaunders Posts: 1,559
    edited 2011-05-28 13:14
    If you include the standard modules, then yes, Linux is large (one of the biggest kernels). Though Linux also has a number of known bugs that this method would have helped to eliminate, had it been used.
  • davidsaunders Posts: 1,559
    edited 2011-05-28 13:17
    OK, I missed your end comment above.

    I agree that no method is all-inclusive. These are guides to help us along the way; there will always be situations that they do not apply to.
  • potatohead Posts: 10,261
    edited 2015-07-01 22:35
    But it does require thought. Consider a change to one line of code. There is writing that change, then updating all the documentation. On a project, that effort could be traded for some general guidelines that allow for some variance in resource use, and that would be a perfectly rational consideration.

    That would "bloat" things some, but there is the balance again. Does the bloat matter? Sometimes it does, sometimes it doesn't. Now the clowns that took that to the extreme on my Thinkpad, clearly abused things. Not cool.

    I have a hard time characterizing those efforts as both mandatory and efficient. It depends on the requirements.

    A recent case in point:

    Baggers and I worked on a sprite / tile driver. He had knocked up some great sprite draw routine, and I was working that way with a tile driver for VGA and TV. When we combined the efforts, we found that it didn't pay off to fully document the sprite code. It was enough to understand its inputs and outputs. I did go through and parse his VGA driver piece, so that I could modify the TV driver piece to operate with the sprite driver module.

    The documentation effort you describe would have exceeded the time he spent authoring the sprite code! We may well have not gotten it, had that been a mandatory thing. Time was a consideration on his end, so what is one to do? Not author the code? IMHO, that's not really practical. Authoring a similar routine would have taken me considerable time, and there is a great chance that I would not have actually authored one that good. Almost sure chance, actually.

    Secondly, for his requirements, that level of documentation was not necessary, as his skill is such that it's enough to read the code. Mine was not. So, then, my investment in parsing and commenting paid off because I could then use the sprite code, and we both could combine the two, which was not originally on the table when the code was authored by either party.

    This happens all the time. Again, if there is one holistic effort to define requirements, the stuff you are advocating for can really pay off. Often there isn't, so then we must operate at the boundaries.

    Edit: Yep! :) We are on the same groove, I think. Ain't latency a ***** sometimes? LOL!!

    And again, the stuff you advocate isn't bad stuff. Knowing it's there can matter, as any extremes or ideologies do, because they are good reasoning tools. Nice to have them, consider where one is, then take the good bits, and realize a strategy that makes best sense. That's more or less my normal mode in things. Love the abstract discussion for that reason. It's a great way to convey things that add considerably to one's "life experience tool box". All good here.

    Another edit: I'll end with a basic potato-truism:

    The more bits of information that have to pass through one's grey matter, the higher the chance of error. Handle a ton of numbers, for example, and one of them will be wrong. This is quite possibly the highest value computing has. If we know we are to add up 1000 things, and we actually handle 1000 things, a mistake can and will be made. On the other hand, realizing we have to do that add, factoring in the use case requirements, then automating that add, means the grey matter only handles the higher-level abstraction, and errors are significantly reduced.
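
    A trivial C sketch of that "automate the add" point. The values and names here (readings, NVALS) are made up for illustration; the point is only that the person states the intent once, and the loop handles every individual addition.
    // Made-up illustration: the machine does all 1000 additions; the person only
    // reasons about "sum the table", never about any individual number.
    #include <stdio.h>

    #define NVALS 1000

    int main(void)
    {
      static int readings[NVALS];       // stand-in data; imagine 1000 measured values
      long total = 0;
      int i;

      for (i = 0; i < NVALS; i++)
        readings[i] = i + 1;            // fill with the values 1..1000

      for (i = 0; i < NVALS; i++)
        total += readings[i];           // every add is automated, none pass through grey matter

      printf("total = %ld\n", total);   // prints 500500
      return 0;
    }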

    Managing all the docs does bring a lot of detail through the brain, and an error there is just as painful as one in the code is, if the default assumption is to write to the docs, and not the actual code. Given that, the code really needs to be the primary consideration, and that means personal investments that help one operate directly with code pay significant dividends, as my brief work with Baggers clearly showed me.

    (and I don't mean to put him on some pedestal --it's just his particular experience with development on lots of different things really highlights the strength of that observation, and I don't have other material handy with which to support it, that's all)
  • davidsaunders Posts: 1,559
    edited 2011-05-28 14:47
    Potatohead:

    Ah, yes.
    We are definitely on the same page. Generally speaking, the smaller the project, the less important the level of documentation; the larger the project, the more important the documentation. Though the algorithm always comes first.
  • davidsaunders Posts: 1,559
    edited 2011-05-28 15:45
    To give an example, here are the comments that will become a function I am working on as we speak. I still have not finished out the algorithm:
    /********************************************************************************************************************
    * Function name: v_pline
    * Description:
    **           Takes a list of points and the number of points, and draws connecting lines, ending with the last
    **            point.
    * Stack:
    **           Uses 2 shorts, 1 pointer for parameters, and 6 shorts for variables; the functions called use up to
    **            three shorts for parameters, and up to 2 shorts for variables.
    **            This gives a STACK TOTAL of 38 bytes max.
    * Parameters:
    **            handle:  Is a short and refers to the handle of the current output device.
    **            pcnt:  the number of points to connect.
    **            *ptlst:  pointer to an array of pcnt points, each being two shorts, the first for the x position, the second
    ***                      for the y position.
    */
    v_pline(handle, pcnt, ptlst)
    short handle, pcnt, *ptlst;
    {
      // Needed variables.
      // Get starting point.  The starting point is passed as (ptlst[0], ptlst[1]); store it in (lstptx, lstpty).
      // Set lsptx to the offset of the last point's x value, and lspty to the offset of its y value.
      // Begin loop through each point, from (ptlst[2], ptlst[3]) through (ptlst[lsptx], ptlst[lspty]), using x and y.
        // Set cptx to ptlst[x] and cpty to ptlst[y].
        // Determine if cptx is equal to lstptx.  If yes:
           // Call vline with x=cptx, y0=lstpty, y1=cpty.
        // Otherwise determine if cpty is equal to lstpty; if so:
          // Call hline with x0=lstptx, x1=cptx, y=cpty.
        // Otherwise continue.
        // ***************************************************************************
        // *** Need to determine best omnidirectional line drawing algorithm. ***
        // ***************************************************************************
        // Set lstptx to cptx, and lspty to cpty.
      // end loop
      return 0;
    }
    

    The way I count lines of code this is currently 0 lines of code.
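
    For the omnidirectional case the header above leaves open, one common choice is Bresenham's integer line algorithm. The sketch below only illustrates that approach and is not the routine from the post; the draw_line name, the hypothetical put_pixel(handle, x, y) primitive, and the locals are assumptions, with the endpoint names borrowed from the comment header.
    extern void put_pixel(short handle, short x, short y);        // hypothetical per-pixel primitive

    void draw_line(short handle, short lstptx, short lstpty, short cptx, short cpty)
    {
      short dx = cptx > lstptx ? cptx - lstptx : lstptx - cptx;   // |delta x|
      short dy = cpty > lstpty ? cpty - lstpty : lstpty - cpty;   // |delta y|
      short sx = lstptx < cptx ? 1 : -1;                          // step direction in x
      short sy = lstpty < cpty ? 1 : -1;                          // step direction in y
      short err = dx - dy;                                        // running error term
      short e2;

      for (;;) {
        put_pixel(handle, lstptx, lstpty);            // plot the current point
        if (lstptx == cptx && lstpty == cpty)
          break;                                      // reached the end point
        e2 = 2 * err;
        if (e2 > -dy) { err -= dy; lstptx += sx; }    // step toward the endpoint in x
        if (e2 <  dx) { err += dx; lstpty += sy; }    // step toward the endpoint in y
      }
    }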
  • potatohead Posts: 10,261
    edited 2011-05-28 15:52
    Wouldn't the product of the algorithm work impact the presumptions you've commented so far?
  • kwinn Posts: 8,697
    edited 2011-05-28 15:54
    Heater. wrote: »
    kwinn,

    Did you use Intel's conv86 utility to translate 8085 asm code to 8086?

    Yes, indeed I did, and wasn't it a delight (not). I ended up turning off strict flag setting, carefully inspecting the code, monitoring the resulting output when it ran, and then finding the remaining bugs. All in all I think it may have been faster to rewrite it. In 20/20 hindsight, of course.
  • davidsaunders Posts: 1,559
    edited 2011-05-28 16:06
    Potatohead:
    It is possible that I may add a variable or remove one by the time it is done, though that is a small edit to the header: just change a number or two. The other two functions are already complete.
  • HollyMinkowski Posts: 1,398
    edited 2011-05-28 16:23
    Someone mentioned the 16kb asm OS that was up on hack-a-day earlier.

    That would be great for when you need to use a motherboard as though it was
    a uC. I have done that and the power you have available is awesome.

    Not sure how much better this custom asm OS would be than just running
    Linux.

    You can customize the asm code in the BIOS flash chip of a motherboard
    and if your application is small enough you can fit it all in there. The BIOS routines
    themselves are already a sort of bare-bones OS that you can build upon. Or
    add a SD card as a boot drive, the BIOS will run the boot sector code from the SD
    once it jumps through the reset vector and does the hardware setup. You could
    put loads of asm on the SD to run your application and store all your data back to it.
    You can use something like a pico power AVR to wake the motherboard when it's needed
    so it does not have to run all the time and waste power. We did that once and it worked
    very well. The AVR would wake the motherboard when needed and it would gather
    data using a custom I/O card, then do the heavy math required and store results to
    the SD card and send an alert if the data justified it.
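
    A rough C sketch of that wake-up arrangement, as a guess at one way it
    could be wired (none of this is from the actual design described above):
    an AVR sleeps in power-down, an external "data ready" line on INT0 wakes
    it, and it pulses PB0, which is assumed to drive the motherboard's
    front-panel power-switch header through a transistor.
    #define F_CPU 8000000UL             // assumed 8 MHz clock

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>
    #include <util/delay.h>

    ISR(INT0_vect)
    {
      EIMSK &= ~(1 << INT0);            // one wake per event; re-armed before sleeping again
    }

    static void press_power_button(void)
    {
      PORTB |= (1 << PB0);              // "press" the power switch for about 250 ms
      _delay_ms(250);
      PORTB &= ~(1 << PB0);             // release
    }

    int main(void)
    {
      DDRB |= (1 << PB0);               // PB0 drives the power-switch transistor
      sei();

      for (;;) {
        EIMSK |= (1 << INT0);           // low level on INT0 wakes the part from power-down
        set_sleep_mode(SLEEP_MODE_PWR_DOWN);
        sleep_mode();                   // sleep until the data-ready line pulls INT0 low
        press_power_button();           // wake the motherboard so it can do its data run
      }
    }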

    It was easy to repair/swap out the systems since all you needed was extra copies of the BIOS
    flash, a stock $150.00 motherboard, an I/O card and an SD card. Total cost of a replacement
    was about $700.00...most of that for the I/O card.

    That was a few years back and I'm certain that job could be handled now with a fast ARM
    but newer and faster multi-core motherboards should pack a real punch as an embedded
    controller :-)

    With a motherboard that has a really fast cpu you have a LOT of
    time to play inside your interrupt code! It's nothing at all like the
    limitations when doing interrupt coding on a 20mips uC.

    The macro assemblers for x86 are free; there are 3 major ones:
    MASM, TASM and NASM. NASM is open source and quite good.

    It's been about 3 years since I touched x86 asm so I'm pretty rusty
    on it now.
  • davidsaunders Posts: 1,559
    edited 2011-05-28 16:50
    Holly:
    If you need that type of processing power for a project, why not use an AMCC PowerPC 440EP? They retail at $53.00 USD for the 533MHz part. For newer projects that need that type of power, this makes more sense than using a PC MoBo or ARM.
  • schill Posts: 741
    edited 2011-05-28 17:02
    I would think that replacing the first post of this thread with completely different text after the discussion has developed (and many people have commented) trashes the "documentation."

    I consider this a pretty bad practice. Much worse than not commenting every line of code.

    If nothing else, can't you put a comment in the first post saying what you did? The current text only makes sense if someone has been following this thread from the beginning (when it was first created). Anyone who starts reading it now will not understand the direction the first comments are taking.
  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2011-05-28 17:29
    Schill, I agree -- not to mention changing the title to something entirely different. The thread's logical sequence has been completely obliterated.

    David, if you must alter great swaths of commentary, use the strikethrough bbCode tags, [s] ... [/s], around the original text so people can see what was deleted.

    -Phil
  • HollyMinkowski Posts: 1,398
    edited 2011-05-28 18:54
    Holly:
    If you need that type of processing power for a project, why not use an AMCC PowerPC 440EP? They retail at $53.00 USD for the 533MHz part. For newer projects that need that type of power, this makes more sense than using a PC MoBo or ARM.

    They had a specific off the shelf high speed
    I/O card they were accustomed to using and wanted
    a motherboard so they could easily use that card.

    The motherboard used an AMD CPU at about 2GHz.
    The signals this setup was monitoring were
    high speed and very complex. I assumed there were
    better options that could do this in a smaller package
    but I know very little about hardware and had no input
    other than about the software.

    The task was monitoring of data that was sent intermittently
    through a fiber optic cable.
  • davidsaunders Posts: 1,559
    edited 2011-05-28 19:07
    Yes, I should have better noted the change. Will correct. (original lost)
  • davidsaunders Posts: 1,559
    edited 2011-05-28 19:12
    Top post changed to better reflect the change. At least now it is known that the thread began with a different view on this topic.