Prop2 MMU - Page 2 — Parallax Forums


  • mindrobots Posts: 6,506
    edited 2014-10-24 11:46
    Contiki? Hmmm...curious......

    Must NOT Google, must resist the urge to investigate...must avoid the rabbit hole........

    Just a quick glance won't hurt.....runs on STM32 and TI and some big PIC chips...none of those seem memory constrained compared to a Propeller...but what's this? A 6502 version??? Maybe just one or two more clicks here.......
  • m00tykins Posts: 73
    edited 2015-08-03 17:09
    Hello again everyone!

    I realize I've become a bit of a broken record on this (haha), but hear me out... I think I've finally figured out a way to get interrupts and an MMU running on the P1/2.

    Basically, you'd need to use the prop to simulate a single-core CPU. Specifically, it would need to be a transport-triggered architecture implementation (see attachment). The multiple functional units would be simulated by different cogs, while the bus would be replaced by the hub. The propeller's assembly would be replaced by a single move instruction, making it a one instruction set computer (how's that for RISC? :P). I'll give an example:

    One cog would be the control unit, another cog would be the ALU, and another cog would hold registers. To execute, say, an ADD r1, r2, r1 instruction in the source assembly, the assembler would need to translate it into microinstructions (kinda like VLIW). For example, it might be mov r1 ALU i1, mov r2 ALU i2, mov ALU o1 r1. This is where polling comes in. Each cog would poll for its flags to be set by the control unit, and when they are, the cog executes its preprogrammed instruction and then sets an "instruction complete" flag telling the control unit it can move on to the next microinstruction. So it would break down as:

    Control cog: set flag in RAM for ALU cog to read memory location r1.
    ALU cog: polls for flag to be set, when found it reads the location and sets ALU ready flag when operation complete.
    Control cog: set flag in RAM for ALU cog to read r2.
    ALU cog: repeat, set ready flag.
    Control cog: set ALU destination location & flag.
    ALU cog: write result to destination, set ready flag.
    Control cog: execute next instruction.

    In this example, only 2 cogs are used. I'm sure in the final implementation there may very well be more, but there will almost certainly be room for a memory functional unit to implement an MMU. At the very least, interrupts are now possible by having a cog poll for interrupts and set a flag for the control cog to stop microinstruction execution. I'm also sure there is a lot of room for optimization here, as is the case with most VLIW processors. Although there will of course be a (possibly very large) performance overhead, with clever programming it can be minimized until it's usable. For example, by making the simulated processor an accumulator/stack/belt machine, some microinstructions can be eliminated because the source and destination registers would be implied without flags needing to be set.
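    The handshake described above can be sketched in a few lines of JavaScript. To be clear, this is a single-threaded model, not Propeller code: "hub" RAM is a plain object, cog concurrency is faked by alternating step() calls, and all the names (hub.cmd, hub.done, the microcode table) are invented for illustration.

```javascript
// Minimal single-threaded sketch of the control-cog / ALU-cog flag
// handshake. Cog concurrency is simulated by alternating step() calls.

const hub = {
  ram: { r1: 5, r2: 7 },   // simulated register file in hub RAM
  cmd: null,               // microinstruction posted by the control cog
  done: true,              // "instruction complete" flag set by the ALU cog
};

const alu = { i1: 0, i2: 0 };

// ALU cog: poll for a posted command, execute it, then signal completion.
function aluStep() {
  if (hub.cmd === null) return;        // nothing posted; keep polling
  const { op, src, dst } = hub.cmd;
  if (op === 'load1') alu.i1 = hub.ram[src];
  if (op === 'load2') alu.i2 = hub.ram[src];
  if (op === 'add')   hub.ram[dst] = alu.i1 + alu.i2;
  hub.cmd = null;
  hub.done = true;                     // tell the control cog to move on
}

// Control cog: walk the microinstruction list for "ADD r1, r2, r1",
// posting the next microinstruction whenever the ALU reports done.
const microcode = [
  { op: 'load1', src: 'r1' },          // mov r1 -> ALU i1
  { op: 'load2', src: 'r2' },          // mov r2 -> ALU i2
  { op: 'add',   dst: 'r1' },          // mov ALU o1 -> r1
];
let pc = 0;
function controlStep() {
  if (!hub.done || pc >= microcode.length) return;
  hub.done = false;
  hub.cmd = microcode[pc++];
}

// Interleave the two "cogs" until the microprogram retires.
while (pc < microcode.length || !hub.done) {
  controlStep();
  aluStep();
}
console.log(hub.ram.r1); // 12
```

    The "instruction complete" flag is what serializes the two cogs; on real hardware each side would spin on a hub location instead of taking turns in a loop.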

    I'm including a picture showing a simple example of a conventional TTA CPU. For further info there's a wiki article.

    Thanks for the patience with my questions everyone, I really appreciate all your help! :D

  • jmg Posts: 15,173
    edited 2015-08-04 09:52
    m00tykins wrote: »
    .... At the very least, interrupts are now possible by having a cog poll for interrupts and set a flag for the control cog to stop microinstruction execution.



    Did you miss that P2 now has genuine interrupts ?

  • I don't see any real benefit on the whole for an MMU.
    We basically already have an MMU: 16 separate Cores/memories available.

    On a CPU with a handful of cores and MB/GB of memory there is a need for an MMU.
    On a 16-Core chip, with each core having its own Cog RAM and a pool of only 0.5MB, I don't see the need.
    You are already going to be down and dirty in the nitty-gritty figuring out what Core is going to use what memory, etc. This only seems useful for the OS 'App' type of model, which I just don't see ever being useful/catching on.
    If people want a small computer, they get a Pi2 with real Cores, memory, Linux/Alt OS, full IO.
    You 'can' make the Prop2 similar, however I don't think it would ever get more than a few people really interested. But it's your time to spend.



  • jmg wrote: »
    m00tykins wrote: »
    .... At the very least, interrupts are now possible by having a cog poll for interrupts and set a flag for the control cog to stop microinstruction execution.



    Did you miss that P2 now has genuine interrupts ?


    It does, in a manner. However, it is not suitable for an MMU. The P2 simply inserts a LINK instruction into the regular instruction stream to redirect the flow of code. Any instructions that are already in the pipeline will be executed (including the one that would presumably cause a memory access fault).
  • Heater. Posts: 21,230
    edited 2015-08-04 14:02
    So I was thinking....

    Why did anyone ever need an MMU in the first place? As far as I can tell:

    1) Primarily because RAM was expensive and memory spaces small. An MMU makes it appear to your software that there is more memory space than there physically is. That makes writing large programs and using large data sets much easier.

    2) Importantly process isolation. When you have multiple processes and perhaps multiple users it's nice to isolate them from each other so that one rogue process can not crash the entire system. Especially if unknown code is being loaded and run.

    Seems to me that 1) was never really necessary. Back in the 1980's I worked on a couple of huge code bases that operated on big data sets. Far more than would fit in the memory of a PC at the time. PCs had no MMU then and MS-DOS certainly could not do anything with them if they had, so how was that possible? They made use of "overlaying linkers" that would insert code into your program that would pull code off disk when you called functions that were not present in RAM. This was quite a good solution as all the memory swapping was under your control, not some random thrashing by the OS as we have in modern operating systems.
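    The overlay mechanism described above can be illustrated with a self-replacing call stub: the first call "loads" the routine, later calls dispatch directly. The disk table and all names here are invented for illustration; a real overlaying linker did this with binary code segments pulled off disk, not closures, but the control flow is the same.

```javascript
// Overlay sketch: calling a non-resident routine triggers a "load from
// disk" before the call proceeds. Disk I/O is faked with a lookup table.

const disk = {
  // "On-disk" overlay segments, keyed by routine name.
  bigReport: () => 'report generated',
};

const ram = {}; // routines currently resident in "memory"

function overlayStub(name) {
  return (...args) => {
    if (!ram[name]) {
      ram[name] = disk[name];   // first call: "load the overlay from disk"
    }
    return ram[name](...args);  // dispatch to the now-resident routine
  };
}

const bigReport = overlayStub('bigReport');
console.log(bigReport()); // 'report generated'
```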

    Turns out we don't need process isolation much either. Just do it in the software. A language like Javascript can isolate all its parts from each other very well; it can load and execute new bits of JS in isolation as well. This has been possible since forever with languages like Lisp. In modern times languages like Occam, XC, and Go are compiled languages that isolate threads without any MMU assistance.

    Anyway, as has been said before, a micro-controller does not need an MMU. MMUs only contribute to unpredictable performance. If you really need an MMU use an ARM or whatever that has it. Very small and cheap nowadays. Adding an MMU to a Prop will not turn it into an ARM.

  • Heater. Posts: 21,230
    edited 2015-08-04 14:04
    Posted in error. Ignore.
  • Cluso99 Posts: 18,069
    Heater, I want to run an OS on the P2.

    BUT, I don't need or want an ARM to do what I want. And I certainly don't want an MMU.
    I am pretty sure everything I want will be in the P2. Of course there are always niceties that could be added, but that might produce another P2HOT.

    I would like the USB FS helper instruction but that can wait until Chip is ready.
    And I would like to be able to have at least one Composite Color (NTSC is fine).
    And I hope the smart pins gives us some form of basic serialiser and deserialiser.
  • Heater. Posts: 21,230
    Yep, why not have an OS on the PII.

    What would you like to see in an OS?

    Some would say a Forth engine is all the OS they need.

    The ability to edit code, save/restore it from SD, compile it and run it, would do for me. Preferably in Spin/PASM.


  • jmg Posts: 15,173
    Cluso99 wrote: »
    ... I want to run an OS on the P2.

    BUT, I don't need or want an ARM to do what I want. And I certainly don't want an MMU.
    I am pretty sure everything I want will be in the P2.

    There is info on a possible complete OS [Compiler/Filesystem/Editor/Screen/Keyboard/Mouse/Network] here

    http://www.inf.ethz.ch/personal/wirth/ProjectOberon/index.html

    It reports that the compiler compiles itself in 3s, and the total system in 10s, on a 25MHz CPU clock.
    The total system is Code: 51168, Data: 48380,
    which is in the realm of the P2.
    & more in my other thread on Project Oberon.
  • Cluso99 Posts: 18,069
    edited 2015-08-05 02:16
    Heater. wrote: »
    Yep, why not have an OS on the PII.

    What would you like to see in an OS?

    Some would say a Forth engine is all the OS they need.

    The ability to edit code, save/restore it from SD, compile it and run it, would do for me. Preferably in Spin/PASM.

    Yes, edit and compile.


    Jmg,

    I already have a P1 OS working, sans edit and compile - just need to include Mark's P1 compiler.

    Of course, add SRAM and ZiCog and we have a full CP/M 2.2 system running :)
    I can even switch back and forth between this and my OS, and transfer files between the CP/M and FAT16/32 systems via my OS.
  • Heater. wrote: »
    So I was thinking....

    Why did anyone ever need an MMU in the first place?

    Process isolation is my main concern. If you have an OS of any sort connected to the internet, you basically NEED an mmu. I once spoke with the devs of contiki/RIOT (IoT OSes) and asked what they do to prevent exploitation on IoT systems without an MMU. Their answer was just "get rid of all the bugs"... XD

    If you want something that can connect to the internet without getting attacked, you NEED an MMU. Think of the internet as a $5 hooker and the mmu as a condom.
  • m00tykins wrote: »
    If you want something that can connect to the internet without getting attacked, you NEED an MMU. Think of the internet as a $5 hooker and the mmu as a condom.

    What the hack? An MMU manages memory. How is it related to hacking from the internet? DoS? Buffer overflows? Sounds like nonsense!
  • Heater. Posts: 21,230
    m00tykins,
    If you want something that can connect to the internet without getting attacked, you NEED an MMU.
    I don't believe this is true in general.

    What you need is a way to ensure that activities running in your machine cannot snoop on or interfere with each other. So that code that is responding to requests coming into my web server cannot see, or change, what I am typing in this editor, for example.

    That does not require an MMU; as the Contiki guys say, what you need is reliable, bug-free software.

    Turns out that achieving bug-free software is quite hard to do, especially when working in languages like C and C++. So a band-aid fix for the acceptance of the inevitable bugs is to isolate processes with an MMU.

    But consider this. There are many servers, probably most, where many users' sessions are being handled at the same time in the same memory space. They are written in languages like Java, Javascript, Go, Erlang and so on. Languages that go out of their way to make typical C-like bugs impossible: buffer overflows, stack smashing, etc, etc, etc.

    The contention is that properly engineered languages and run times can make processes and an MMU unnecessary, even in the face of bugs in the application code or devious visitors.
  • m00tykins,

    To advance your analogy, I think having an MMU is about as good as crossing your fingers as you take the $5 out of your wallet.
  • rjo__ Posts: 2,114
    m00tykins,

    I am really not a hardware guy in any sense of the word... so, you are probably going to have to translate what I say and figure out what I meant to say or should have said:) .... that's the essence of multi-disciplinary research!!!

    Memory is a hot topic for me right now, because I need to figure out what I'm going to do a few steps down the road.

    For strict P2 purposes, I think an MMU would be overkill. BUT even with 512KB of hub memory, there will be applications out there that would benefit from more. I am planning to use the P2 for camera acquisition... but to process the data, I will either take it off the chip or string some P2's together. Even though the P2 is capable of talking to external memory, that would take a lot of pins. My prejudice is to take it off of the P2's back as much as possible. The P2 is a controller... having it waste half of its pins on memory means that there are other sensors that it won't be talking to.

    I like the concept of smart memory... where the P2 is an executive controller, telling the memory what to do in a macro sense and letting the memory respond appropriately.... Smart memory could be a dedicated P2 hooked to an external chip(s) or it could be a P1V using on chip(on FPGA that is) memory... hard to say right now:)
  • Heater. wrote: »
    Turns out that achieving bug-free software is quite hard to do, especially when working in languages like C and C++. So a band-aid fix for the acceptance of the inevitable bugs is to isolate processes with an MMU.

    But consider this. There are many servers, probably most, where many users' sessions are being handled at the same time in the same memory space. They are written in languages like Java, Javascript, Go, Erlang and so on. Languages that go out of their way to make typical C-like bugs impossible: buffer overflows, stack smashing, etc, etc, etc.

    The contention is that properly engineered languages and run times can make processes and an MMU unnecessary, even in the face of bugs in the application code or devious visitors.

    Part of what you're saying is technically true, that yes, a perfectly engineered system running only perfectly engineered code wouldn't need an MMU. But no one has yet made anything close to such a system in practice. The only sizable "bug-free" system ever made is the recent seL4 formally verified microkernel (see http://sel4.systems) (btw it also requires an MMU :P ).

    As for your last statement, though, I'm not sure I follow... How could a system without an mmu possibly run an untrusted program? Without an mmu, an untrusted program could write to *any* space in memory, including the kernel, so I don't see how a properly engineered runtime could prevent this. Security requires sandboxing, and the only way to make decent sandboxes is via an mmu. How can a computer run untrusted code if the kernel runs with the same privileges as the third-party application?
  • jmg Posts: 15,173
    m00tykins wrote: »

    As for your last statement, though, I'm not sure I follow... How could a system without an mmu possibly run an untrusted program? Without an mmu, an untrusted program could write to *any* space in memory, including the kernel, so I don't see how a properly engineered runtime could prevent this. Security requires sandboxing, and the only way to make decent sandboxes is via an mmu. How can a computer run untrusted code if the kernel runs with the same privileges as the third-party application?

    Good point, on that yardstick a Prop has 9 or 16 kernels, all safely isolated in their own COG, and HW means no one else can interfere in that Memory.

    Of course, there are still plenty of ways to take down a system, even if you cannot get at its memory, if you are hell-bent on doing so.
  • Heater. Posts: 21,230
    edited 2015-08-08 06:29
    m00tykins,
    ...a perfectly engineered system running only perfectly engineered code wouldn't need a MMU. But no one has yet made anything close to such a system in practice.
    Exactly.

    Back in the day the company I worked for got a big fat document from Intel, under non-disclosure agreement, that described all the known bugs in the 286 microprocessor. One of the key points of the 286 was its virtual memory, MMU, protected paged memory, segments, privilege levels, etc, etc. Most of the bugs in that document described ways in which the protection mechanisms could be circumvented by rogue software!

    I have not been inclined to trust such things since.
    How could a system without an mmu possibly run an untrusted program?
    Easy.

    1) We start from the idea that we are never going to trust and run anything unless we have the source code.

    2) We only accept programs written in safe languages. By "safe" I mean languages that do not allow for pointers to any memory location and so on.

    Consider the following example in Javascript:
    (function userProcessesOne () {
        console.log("Hello!");
    }());
    
    (function userProcessesTwo () {
        let x  = 0;
        setInterval (function () {
            console.log("x = ", x);
            x += 1;
        }, 1000);
    }());
    
    // Evil hacker virus trojan bad stuff gets loaded here:
    // ...
    // ...
    // Well, I would show some if I could think of any JS to put here that could mess with the above code in any way.
    
    You might baulk at the JS idea but it is a very secure language. There are alternatives like Go and Ada and so on. If you like, we can compile your C/C++ to JavaScript and run it as shown above in a totally safe way and with most of the performance of compiling to native binary.

    How does this relate to a micro-controller like the PII ?

    Well, clearly we may not want to be compiling from source on the device itself. But if Spin were a safe language then I could take any source from any evil hacker, compile it on my PC and run it on the PII in a totally isolated way.

    No MMU required.






  • potatohead Posts: 10,261
    edited 2015-08-08 07:03
    and no OS either, and I am eager to see that idea play out on the P2.

    Doesn't mean people can't have one. Some will for sure. But they won't need one, and that is what I find very interesting and compelling.


  • Heater. Posts: 21,230
    jmg,
    ...on that yardstick a Prop has 9 or 16 kernels, all safely isolated in their own COG, and HW means no one else can interfere in that Memory.
    Not really. Only if they are confined to that 512 longs of COG space.

    Any code "isolated" in a COG will be communicating with code outside of its COG via memory in HUB. It's not much use otherwise. That is not protected in any way.

    Rogue code "isolated" in COG can of course read and write anyone else's HUB memory. No protection there.

    Any COG can stop any other COG. No protection there.
    ...plenty of ways to take down a system, even if you cannot get at its memory...
    Yep, allocate yourself enough RAM that the system is starved. On a single processor system, use all the CPU time. And so on.
  • Heater. Posts: 21,230
    Did I mention how a COG can stomp on any pin anyone else happens to be using. Or perhaps claim all the locks?

    If you want process isolation in order to be able to run untrusted software there is a lot more to think about than just an MMU.

    You will of course need at least two execution modes, "privileged" and "unprivileged" or whatever you want to call it, the former managing all that untrusted code that runs as "unprivileged".




  • There are undoubtedly ways to accomplish this.
    How many Cores you'd have left, what their capabilities would be, and the total MIPS of the system at the end of it are probably going to be pretty disappointing.

    Just to throw an idea out however:

    There are dual-cog push/pull objects in OBEX, right?
    Could not one make an 8-Cog object, with each Cog handling one functional unit of the traditional CPU?
    Then they could work in a Token ring sort of fashion
    1 Cog Program execution, 1 for Memory duty, 1 for APU, 1 for IO, etc.
    This is probably the worst, slowest, and least efficient way to start. But interesting from a learning POV.
    However, while it might turn a Prop into a 1MHz 6502 equivalent, it probably is good to set the bar somewhat realistically.

    If you can not trust the code you put into a Cog, then you have to forgo intelligent Cogs.
    Or, load your Cog with untrusted code for example starting at $0FF
    load your MMU at $000, which is the primary control loop
    Have the MMU communicate with other Cogs as to which memory is free, and which is already in-use by others
    Have Hub memory requests redirected through the MMU code to allow/disallow access

    Obviously memory throughput would be drastically reduced, if there even was a way of forcing redirection.
    Right now I don't see a way to do that unless you changed how Cogs actually work, and make Cogs themselves self-contained VM type things?

    Interesting, but not useful that I can see.


  • Heater. Posts: 21,230
    What you are describing is basically a virtual machine, as in Java, C#, or even Spin, with its different parts spread over a few COGs.

    We kind of did that with the ZiCog and qz80 emulators for the Propeller and the Zog ZPU engine. Only two parts though, the instruction execution engine and the memory interface/cache engine. This was basically because we wanted a way to use external memory to make up the 64K bytes required for a Z80 machine.

    I don't think splitting the job over more COGs will get one any performance improvement.

    But what about memory protection, sand boxed execution, and Spin?

    I could imagine a Spin interpreter that enforced range checks on every memory access such that an object can never read/write outside the bounds of its own VAR and DAT space. Similar checks would be in place for pin access and so on. In that way any "untrusted" Spin byte code could be loaded and run.
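    As a sketch of that kind of per-access range check (the layout and names are invented for illustration, not actual Spin interpreter internals): each object gets a base/limit window into hub RAM, and every peek/poke the interpreter performs on the object's behalf is validated against it.

```javascript
// Bounds-checked "hub" access: an object is confined to its own window.

const hubRam = new Uint8Array(512);

// Each "object" gets a base/limit pair covering its VAR+DAT space.
function makeObject(base, size) {
  function check(addr) {
    if (addr < base || addr >= base + size) {
      throw new RangeError(`access to ${addr} outside [${base}, ${base + size})`);
    }
    return addr;
  }
  return {
    peek: (addr) => hubRam[check(addr)],
    poke: (addr, val) => { hubRam[check(addr)] = val; },
  };
}

const obj = makeObject(64, 32);  // this object owns hub bytes 64..95
obj.poke(64, 42);
console.log(obj.peek(64));       // 42

try {
  obj.poke(200, 1);              // outside the window
} catch (e) {
  console.log(e.message);        // access to 200 outside [64, 96)
}
```

    The cost is one compare pair per memory access, which is exactly the performance hit the next sentence worries about.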

    Probably a pointless exercise that nobody would use and would only degrade performance.




  • Heater, after I posted I realized a Spin interpreter or something like that would have been a better explanation.

    Splitting into more Cogs would probably do nothing but slow everything down even more.

    Pointless unless there is a world-wide calamity, with Rocklin being the only place to escape unscathed.

  • Heater. wrote: »
    1) We start from the idea that we are never going to trust and run anything unless we have the source code.

    Witness the IOCCC
    Heater. wrote: »
    2) We only accept programs written in safe languages. By "safe" I mean languages that do not allow for pointers to any memory location and so on.

    I'm glad you put the word "safe" in quotes. It's only as safe as the underlying implementation. A brief review of sandbox-escaping techniques for Java, JavaScript, Flash, etc. will demonstrate this.

    This makes me think of the difference in the implementation of Tesla's internal network verses other car manufacturers.

    Tesla: You may only send these X commands via this bridge and they're abstracted away from CAN protocol.
    Others: We only send these few CAN packets in our code and they're all completely safe so let's just connect the two directly and hope no-one notices ;-)

    Similar principle.

    NB: I'm not arguing for an MMU in the prop, I'm just addressing your assertion that languages can be made safe.
  • jmg Posts: 15,173
    __red__ wrote: »
    NB: I'm not arguing for an MMU in the prop, I'm just addressing your assertion that languages can be made safe.

    The problem is also that 'safe' means many different things.

    I notice that in the Oberon links I gave in another thread they claim this:

    ["Considered extravagant and hardly necessary only years ago, run-time checks are generated automatically. In particular, they cover index range checks and access to NIL-pointers. Due to their efficiency they hardly affect run-time speed, but are a great benefit to programmers."]

    That certainly makes code safer from accidental cross-corruption, but does not make it safe from attack.
  • Heater. Posts: 21,230
    edited 2015-08-11 05:21
    __red__
    Witness the IOCCC
    I'm quite aware of the sneaky tricks used in that competition to get "rogue" code into a program even at the source code level.

    I did not make myself clear enough. When I said "never trust anything unless we have the source" I was not implying that the source be read by humans and checked for rogue code. That is clearly prone to exploit as the IOCCC shows. Not to mention most normal bugs, see recent SSL bugs for example.

    No, I mean that the program you give me has to be in a form that is amenable to analysis by my compiler and or run time system and subject to the rules of that runtime system. That my runtime gets to decide what runs not your binary instructions running directly on the machine.

    That form could be source, like Javascript, or it could be byte code, like Java. Neither may be actually intelligible to a human but they are verifiable and controllable by the run time.
    It's only as safe as the underlying implementation...
    That is true. No matter if we are talking hardware or software or a combination of both. The recent RAM bit flipping exploits show that hardware is not immune to problems: http://googleprojectzero.blogspot.fi/2015/03/exploiting-dram-rowhammer-bug-to-gain.html
    I'm just addressing your assertion that languages can be made safe.
    Yes my assertion is that a language can be safe. That the safety can be ensured at the source level by the compiler/analyser. We can have run time checks to be doubly sure. And hence an MMU is not necessary. Let's make a safe language now. Let's call our language "SAFE?"...

    In SAFE? you can use variables named "A", "B", "C" through "Z". They can be arrays, e.g. "var A[10]". SAFE? allows computation with those variables using the normal operators ":=", "+", "-", "*", "/" and so on. SAFE? allows for sequences of such statements. SAFE? allows decision making with conditional statements of the form "if <condition> then <statements> else <statements> done". There are no goto or loop constructs in SAFE?. SAFE? allows for functions "function F (A, B) do <statements> done". Functions in SAFE? may only access their own local variables and parameters, and they return a result "return R". In fact the top level of a SAFE? program is such a function.

    My contention is that we can define a syntax and semantics for SAFE? that is, by source code analysis, provably safe. That source code analysis would of course be done by a program not a human. You can try and write rogue code in SAFE? but it won't get past either the lexer, the parser, the analyser, the compiler and/or the run time checks.

    You did not actually make any points in your post that refute this assertion. I look forward to hearing them.

  • Heater. wrote: »
    Yes my assertion is that a language can be safe. That the safety can be ensured at the source level by the compiler/analyser. We can have run time checks to be doubly sure. And hence an MMU is not necessary. Let's make a safe language now. Let's call our language "SAFE?"...

    In SAFE? you can use variables named "A", "B", "C" through "Z". They can be arrays, e.g. "var A[10]". SAFE? allows computation with those variables using the normal operators ":=", "+", "-", "*", "/" and so on. SAFE? allows for sequences of such statements. SAFE? allows decision making with conditional statements of the form "if <condition> then <statements> else <statements> done". There are no goto or loop constructs in SAFE?. SAFE? allows for functions "function F (A, B) do <statements> done". Functions in SAFE? may only access their own local variables and parameters, and they return a result "return R". In fact the top level of a SAFE? program is such a function.

    My contention is that we can define a syntax and semantics for SAFE? that is, by source code analysis, provably safe. That source code analysis would of course be done by a program not a human. You can try and write rogue code in SAFE? but it won't get past either the lexer, the parser, the analyser, the compiler and/or the run time checks.

    You did not actually make any points in your post that refute this assertion. I look forward to hearing them.

    If I may interject, what you're suggesting is certainly possible (and has been done, e.g. Ada, Cyclone, and other high-assurance languages).

    And you don't even need to make a new language to do such automated source code analysis. For example, in the seL4 high-assurance kernel project, the C language was used, but because of partially-automated formal verification, it took less than a decade to ensure there were no bugs in the kernel (AFAIK, for most of the project they were developing the automated formal verification S/W). Development is still ongoing.

    Getting back to the SAFE? language, IMO it's literally impossible to make a powerful "safe" language. All programming languages are mathematically equivalent, because in order to compute any possible problem the language must be Turing-complete, and all Turing-complete languages can be translated into each other.

    If you look at some of the unusual languages that have been invented, such as the OISC language or Befunge, you'll see you can get away with a lot before the language is no longer Turing-complete. In OISC, there is a single instruction, which is "subtract and branch if </= zero" (subleq). This language has actually been proven to be Turing-complete, even though something as simple as "hello world" is more than 20 LOC. Still, a C++ to OISC translator has already been made, showing that anything that can run in C++ can also run in OISC.
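    For the curious, a subleq machine fits in a dozen lines. The halt convention assumed here (a negative branch target stops the machine) and the little add program are one common choice among several; conventions vary between implementations.

```javascript
// Minimal subleq interpreter: "subtract and branch if <= zero" is the
// only instruction. A negative branch target halts the machine.

function subleq(mem) {
  let pc = 0;
  while (pc >= 0) {
    const a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
    mem[b] -= mem[a];                  // the single instruction
    pc = mem[b] <= 0 ? c : pc + 3;     // branch if result <= 0
  }
  return mem;
}

// Program: add mem[9] into mem[10] using scratch cell mem[11]:
// Z -= X; Y -= Z (i.e. Y += X); Z -= Z (clear); halt.
const mem = [
  9, 11, 3,    // Z -= X        -> Z = -5, branch (Z <= 0) to 3
  11, 10, 6,   // Y -= Z        -> Y = 7 + 5 = 12, fall through to 6
  11, 11, -1,  // Z -= Z; halt  -> branch target -1 stops the machine
  5, 7, 0,     // X = 5, Y = 7, Z = 0
];
subleq(mem);
console.log(mem[10]); // 12
```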

    So it's basically been done already... But it's waaaay more complicated than you're suggesting, since you need advanced AI to do the formal verification, and even then it isn't completely automated.

    P.S.

    On an unrelated note, I'm not sure if everyone's been understanding what I've been saying so far. Like, when Heater and koehler suggested splitting a single CPU across multiple cogs, that's exactly what I was suggesting in my first "bump" post of making a TTA cpu, which was higher up on the same page.

    Also, FWIW I'm not sure why we're still discussing whether an MMU-less system can be made secure, since this is basically already well known. In a system where all possible code that could run on it is already known and trusted, you don't need an MMU. There is no such thing as trusted code. Therefore, you need an MMU.

    I'm quite sure I've mentioned the Minix OS before, and if anyone has bothered to read up on Minix, if you aren't familiar with Tanenbaum's work already, you'll see the point he constantly makes about bugs. According to 3 different studies, a satellite planning system has about 6-16 bugs/kloc, an inventory system has about 2-75 bugs/kloc, and the BSD kernel, which is audited ad nauseam by an army of autistic virgins, has about 3.35 post-release bugs/kloc.

    These are all things I've said either earlier in this thread or in previous posts in other threads, and furthermore are easily verified by google/wikipedia. oh well... :D
  • Heater. Posts: 21,230
    m00tykins,
    These are all things I've said either earlier in this thread or in previous posts in other threads, and furthermore are easily verified by google/wikipedia. oh well...
    What a fascinating debate. I thought you had me there for a moment with that Turing machine cannonball. But no. There are so many holes, fallacies and logical inconsistencies in your thesis that I don't know where to start.

    Give me a while to get my counter arguments lined up...