
John Carmack on Inlined code, and other thoughts.

potatohead Posts: 10,261
edited 2014-09-27 17:11 in General Discussion
Stumbled into this while checking in on Hacker News (the YC site where users submit items of interest to hackers).

http://number-none.com/blow/john_carmack_on_inlined_code.html

In the spirit of the ongoing "Why Propeller?" discussions, and because Heater, I'll highlight this quote:
A year ago I was involved in a discussion about writing extremely reliable software for aerospace applications, as I have been several times over the years. Typically I am there to rail against the people that talk about using threads and an RTOS for such things, when a simple polled loop that looks like a primitive video game is much more clear and effective. However, this particular discussion brought up a new wrinkle for me:



>Indeed, if memory serves (it's been a while since I read about this)...
>
>The fly-by-wire flight software for the Saab Gripen (a lightweight
>fighter) went a step further. It disallowed both subroutine calls and
>backward branches, except for the one at the bottom of the main loop.
>Control flow went forward only. Sometimes one piece of code had to leave
>a note for a later piece telling it what to do, but this worked out well
>for testing: all data was allocated statically, and monitoring those
>variables gave a clear picture of most everything the software was doing.
>The software did only the bare essentials, and of course, they were
>serious about thorough ground testing.
>
>No bug has ever been found in the "released for flight" versions of that
>code.
>
> Henry Spencer
> henry@spsystems.net

Comments

  • Heater. Posts: 21,230
    edited 2014-09-27 00:26
    potatohead,
    ...and because Heater,...
    Heater here. What on Earth do you mean?

    Anyway, that snippet could well be describing the programming language Lucol that was, and still is, used in flight-critical control systems. It's used in the Full Authority Digital Engine Controllers (FADEC) of the Boeing 777, for example, and in the actuator controls of the C-130 transport. And so on.

    Lucol does not have subroutines. Or at least not user-definable ones. There are many control-related functions one can call from one's code.

    There is no goto in Lucol or any construct for creating loops. Your code module starts execution at the top and proceeds through whatever conditional statements to some exit point.

    Lucol modules are run from timer ticks, say with a 10, 100 or 1000 millisecond period. If you want iteration you have to set up some state that is stepped forward every time you are called (sketched at the end of this post).

    Because of the lack of loops, goto, functions, Lucol is incredibly easy to test: Set up the variables it uses as input, call it once, see what happened to the output variables. Do that for a lot of interesting cases the code has to handle.

    Because Lucol has no looping constructs you can exactly calculate a module's maximum execution time. That means you can be sure that nothing in your system will exceed the amount of time it is supposed to execute in.

    In fact the compiler prints out the execution time of each module when it's done. A program has many modules and the compiler knows the run time budget of the complete ensemble.

    That is the only compiler I have ever seen that can do that.

    Lucol was overrun by Ada for such applications. God only knows why. Ada is a million times more complex and that makes it impossible to reason about your program's behaviour. Especially in terms of timing. Not good in safety-critical real-time systems.

    As Lucol was a pre-internet thing there is not much info on it to be found. But there are some docs around:

    These for example talk about parallel processors and Lucol.
    http://link.springer.com/chapter/10.1007/BFb0020345
    http://books.google.fi/books?id=rEVsLVzL2C8C&pg=PA36&lpg=PA36&dq=lucol+language&source=bl&ots=iCKm2PAIev&sig=mvOS9Flqwvhv7J6KxYu14-ykvqc&hl=en&sa=X&ei=wF0mVL-7KqHQygP3sIHgDQ&ved=0CEcQ6AEwBQ#v=onepage&q=lucol+language&f=false
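
    To make that concrete, here is a minimal sketch in C of the style described above (this is not LUCOL syntax; the module, variable and timing names are invented): all data statically allocated, one forward-only pass per timer tick, no loops and no calls.

    #include <stdint.h>

    /* statically allocated inputs, outputs and internal state */
    static int32_t demand_rpm;      /* input, written by another module  */
    static int32_t actual_rpm;      /* input, written by a sensor module */
    static int32_t fuel_valve_cmd;  /* output, read by the actuator code */
    static int32_t integrator;      /* internal state, persists per tick */

    /* Called once per tick, e.g. every 10 ms. Control flow is forward
       only, so the worst-case execution time is trivially bounded. */
    void speed_control_tick(void)
    {
        int32_t error = demand_rpm - actual_rpm;

        integrator += error;
        if (integrator >  10000) integrator =  10000;   /* clamp */
        if (integrator < -10000) integrator = -10000;

        fuel_valve_cmd = (error * 3) + (integrator / 16);
    }

    Testing it is exactly the routine above: set the input variables, call it once, inspect the outputs.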
  • Martin_H Posts: 4,051
    edited 2014-09-27 06:47
    The Arduino programming model favors this style as well. The setup method does all of the one-time initialization, and the loop method is called repeatedly by the hidden main loop. You tend to write a set of functions that do simple operations that are knitted together by the main loop (sketched below). Heck, most of my PBASIC programs work in an identical manner.

    Now obviously a return from a function is a backwards branch, so to really avoid that your functions would be macros that expand inline.
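
    A minimal sketch of that polled-loop shape in plain C (not an actual Arduino sketch; the function names are made up): one-time initialisation, then a single outer loop that polls a set of simple, non-blocking functions.

    #include <stdbool.h>

    static void read_sensors(void)   { /* sample inputs into static variables  */ }
    static void update_control(void) { /* compute outputs from those variables */ }
    static void drive_outputs(void)  { /* write the outputs, never block       */ }

    int main(void)
    {
        bool running = true;          /* one-time initialisation, like setup() */

        while (running) {             /* the single outer loop, like loop()    */
            read_sensors();
            update_control();
            drive_outputs();
        }
        return 0;
    }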
  • Heater. Posts: 21,230
    edited 2014-09-27 07:25
    Martin_H,

    A return from a function is not a backward branch. Even down at the machine code level a return gets you to the instruction after the one that made the call. It has to, else you would just make the call again.

    This is not the issue with functions. One problem is that if I show you a page of source code and claim that it performs a function X then you can read it, analyse it, run it, test it and convince yourself that it really does X for all possible inputs. Except of course if in the middle of the source for X you see a call to a function Y. Now you cannot prove anything about X unless you also have Y and have fully analysed and tested that. And so on.

    A common hairy problem is that functions in languages like C often have side effects. Calling Y may change something in X other than the return value. Or perhaps Y keeps some history so every time you call it it does something different.

    All in all composing programs out of functions and classes and methods etc is a sure fire way to make them harder and harder to reason about and test.

    LUCOL composes programs by communicating outputs to inputs; each part is isolated, analysable and testable on its own. It's rather like building circuits with chips. One does not nest chips inside chips inside chips...
  • jazzed Posts: 11,803
    edited 2014-09-27 07:35
    Heater. wrote: »
    All in all composing programs out of functions and classes and methods etc is a sure fire way to make them harder and harder to reason about and test.


    So following this logic, the 24K-line C function I once had to maintain should have been a breeze!

    Not!
  • Heater. Posts: 21,230
    edited 2014-09-27 08:23
    Ouch!

    Not indeed.

    I did not suggest writing all your code in a single module or source file.
  • Martin_H Posts: 4,051
    edited 2014-09-27 08:33
    @Heater, by backwards branch I mean that a return from a function inevitably requires the program counter to be lower after the return than before it. This is because you can call the function twice in a row, so tracking the PC value would show it increasing then decreasing. The only way to avoid this would be inlining all functions so only the final backward branch decrements the PC.
  • Heater. Posts: 21,230
    edited 2014-09-27 08:45
    Martin_H,
    ...a return from a function inevitably requires the program counter to be lower after the return than before it
    No it does not. The calling function can be in higher memory than the called function in which case the program counter is higher after the return than before it. No reason this cannot happen.

    But this is somewhat beside the point. When analysing and testing source code we are not concerned with where the compiler put code or how it called it or even if the code ends up in line.

    The point about not having loops and functions is to make the source amenable to analysis. Especially not having loops means it is possible to build a compiler that can tell you exactly the maximum run time of your program. Which is very important in a safety-critical real-time system. Something that other language compilers and tools cannot do.
  • potatohead Posts: 10,261
    edited 2014-09-27 12:09
    Heater, that was why I posted it. On the Prop, we are reduced to using specific constructs, often polling loops, and we often compartmentalize things. Both of those tend to make complex, interacting, parallel things simpler to accomplish, or at the least simpler to understand.

    Carmack, with all his real-time graphics wizardry, seems to have arrived at some similar realizations, in his context of course.

    You got mentioned because of the simple avionics system being discussed, which you have brought up here many times.

    I've been off on a few adventures, and am currently reading and exploring some aspects of programming for my own enlightenment, and just thought this one worth a share and potentially brief discussion.

    To me, it's a tool. If analysis is difficult, perhaps coding for analysis makes more sense. It won't always, but it can, and that's a nice realization. Put another way, limiting the set of choices can improve the perception of the path to an effective solution.
    The point about not having loops and functions is to make the source amenable to analysis. Especially not having loops means it is possible to build a compiler that can tell you exactly the maximum run time of your program. Which is very important in a safety-critical real-time system. Something that other language compilers and tools cannot do.

    Yes. I strongly suspect we will be thinking in some of these ways on the P2. Of course there will be loops, but there won't necessarily be cycle exact things, and we will want to think about maximums and analysis.
  • evanh Posts: 15,922
    edited 2014-09-27 15:01
    Ladder-logic has the same basis, except it's traditionally pictorial to match its heritage of electrical wiring. IMHO, everyone should learn it at school before being taught any procedural languages.
  • Martin_H Posts: 4,051
    edited 2014-09-27 15:20
    Heater. wrote: »
    No it does not. The calling function can be in higher memory than the called function in which case the program counter is higher after the return than before it. No reason this cannot happen.

    Heater, my point should be independent of the layout of the binary. I said that as soon as you call a function twice the PC will end up lower. This is because the function's entry point is lower than its exit point, so the second call has to move the PC backwards. But in your example the PC is lower at function entry than it was before the call.

    But when you say no backwards branches you really mean no loops other than the main one.
  • Heater. Posts: 21,230
    edited 2014-09-27 15:36
    Martin,

    I don't follow you. If I have:

    label1:
    doSomething();
    doSomething();
    label2:

    My program counter is normally higher at label2 than it was at label1.

    Or I might have:

    int doSomething1 () {
        doSomething2 ();
    }

    doSomething1 ();

    Same is true if I put the labels in.

    Thing is the value of the program counter is not important. If indeed there is one at all. What if my processor has a PC that runs backwards?!

    Yes, no loops other than the main one. Except in such systems there is no main loop in actual code. Everything is triggered by an interrupt, say every 10 ms. In this way the real-time aspect is maintained (a rough sketch follows).
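
    As a rough sketch (the names are invented and no particular hardware is assumed), the dispatch can be as simple as a periodic timer interrupt that calls each module in turn; as long as the summed worst-case module times fit inside the tick period, the timing is guaranteed by construction.

    static volatile unsigned long ticks;

    static void speed_control_tick(void) { /* forward-flow-only module */ }
    static void monitor_tick(void)       { /* another such module      */ }

    /* Hooked to the hardware's 10 ms periodic timer interrupt.
       Each module runs to completion every tick; there is no main loop. */
    void timer_10ms_isr(void)
    {
        ticks++;
        speed_control_tick();
        monitor_tick();
    }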
  • Martin_H Posts: 4,051
    edited 2014-09-27 16:20
    Treat the computer like a black box that shows the PC on a display. Ignore esoteric architectures for now.

    No loops but the outer loop means that, except for function calls, the PC always increases. At the bottom of the loop the PC is reset to the top of the loop. The exception is function calls, which can alter the PC to a significantly higher or lower value. Upon return from the function the PC will increase to the next instruction, but compared to the PC within the function it might be higher. If the program again called the subroutine, the PC would be back at the entry point, which is lower than the exit point.

    To someone watching the display, the PC changing in this way might look like a loop other than the outermost one.
  • evanh Posts: 15,922
    edited 2014-09-27 16:53
    Agreed with the observation of static allocations and 100% processor utilisation as typical traits.

    The main point is continuous multifunction. Tight timing constraints are a subset of this. There are two coding methodologies: either diverse tasks (state machines) or the master control program (glue logic).

    The common batch processing environments fail to achieve either of these, as many of their functions will stall waiting on a condition. Diverse tasks in an appropriately supportive OS can be used in conjunction with batching to bring them to life. However, this is risky as it brings with it lots of curly stuck states.

    The logic approach, which doesn't need much of an OS at all, is often criticised as being both unwieldy on large projects and too primitive. The unwieldiness can be addressed by cutting the logic up into sections and even layering its functionality. Its primitive nature, as already noted, is an asset. Keep it simple, stupid. Virtual constructs of things like loops can be created from logic + counters for example; it's just a matter of good labelling practices. Importantly, these constructs don't stall the program execution (see the sketch below).
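
    For example, a minimal sketch in C (the names are invented) of such a virtual loop: a static counter acts as the loop index, one "iteration" is performed per scan, and the scan itself never blocks.

    #include <stdbool.h>
    #include <stdint.h>

    #define N_CHANNELS 8

    static uint8_t channel;               /* the counter is the loop index    */
    static int16_t reading[N_CHANNELS];   /* results accumulated across scans */
    static bool    sweep_done;

    static int16_t sample_channel(uint8_t ch) { return (int16_t)ch; /* stand-in */ }

    /* Called once per scan. Advances one step of the "loop" each pass
       instead of iterating in place, so it never stalls the scan. */
    void sweep_step(void)
    {
        reading[channel] = sample_channel(channel);

        channel++;
        if (channel >= N_CHANNELS) {
            channel = 0;
            sweep_done = true;            /* one full sweep completed */
        }
    }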
  • evanh Posts: 15,922
    edited 2014-09-27 17:11
    CNC controllers, funnily, have both a batch processing part and a logic part. These controllers are semi-generic in that much of the I/O is definable as to what sensors and actuators they are connected to. They aren't built for one specific machine nor for one manufacturer.

    The operator-supplied G-code runs as a sequential series of instructions to be executed in an explicit order, a batch so to speak. But there is also another part of the controller, the machine-customised part. It is a chunk of logic that defines the machine-specific rules. The operators don't modify it, but it's still there and can be edited if desired.