
RL4000: 4-Bit CPU trainer


Comments

  • Heater. Posts: 21,230
    localroger,
    The 8088 and 8086 were very forward-looking chips for their time.
    I have to disagree there.

    As I said above, Intel had already invested a couple of years of development into the i432, which was a very forward-looking thing. When that did not work out, the 8086 was a quick hack job, done in a matter of weeks, to keep up with the coming 16-bit market.

    Meanwhile IBM was eyeing the 32 bit Motorola 68000 for its PC idea. As far as I can tell the decision to go Intel was about second sourcing and basic availability. Which is why we have AMD. Thank God.

    Anyway, the processor designer guys at the company I worked for at the time were dismayed at the backwardness of the first IBM PC when they got one to pull apart.

    I'm surprised that you say "the instruction set encouraged program bloat". The Intel x86 instruction set is CISC. It follows the old idea of making life simple and compact for the assembly-level programmer, without much thought as to how compilers might deal with it.

    Yes, being selected by IBM was magic.
    And while they weren't object-code compatible with the 8080, they were "source code compatible,"
    Not really. One could not assemble 8080/8085 code directly for the 8086.

    At the time I had a nice project to move tens of thousands of lines of 8085 assembler to the 8086. Intel provided a tool, conv86, to do that. The resulting code was twice as big and ran half as fast, even with the 8086 target running at twice the clock rate!

    The problem was that the 8086 set some status flags differently from the 8080, so the tool put a ton of extra instructions in there to handle the difference. Mostly they were not needed for correct operation of your code, and you could turn this "accurate flags" feature off. But then you had to inspect all your code to find out where it mattered!

    But the kicker was hardware compatibility. I made a daughter board carrying an 8088 and its associated clock circuitry and plugged it into the 8085 sockets of the boards that company was making. It worked! BOOM, they could upgrade to 16 bits with almost no hardware changes.

    All in all, I do agree. That endless chain of sort of backwards compatibility has been a huge win for Intel over the years.

    On the other hand it upsets me. If all programs were delivered as source code then they could always have been recompiled for whatever architecture came along. This whole deal held us back for a long time. For example, it was ten years and more between Intel making a 32 bit processor available and people having a 32 bit operating system that could use it.

  • Cluso99 Posts: 18,069
    IBM chose Intel because Intel was smaller than the others, so IBM could buy a chunk of the company, which they did. IIRC IBM bought 15% of Intel.
  • Rayman Posts: 14,589
    edited 2018-03-16 18:58
    Somebody just reminded me of the trainer from the 70's that inspired me to make this:

    https://commons.wikimedia.org/wiki/File:COM-TRAN10c.jpg

    I think I found the original manual I used here:

    http://bitsavers.informatik.uni-stuttgart.de/pdf/digiac/com-tran_ten/KDA-3032_Digiac_COM-TRAN_TEN_Training_Jun81.pdf

    I remembered this being a 4-bit computer, but I see now it was 8-bit. Actually, I think it is way more advanced than this little 4-bit trainer...
  • Rayman Posts: 14,589
    edited 2018-03-16 18:00
    Here's a better picture:
    https://www.linkedin.com/pulse/digiac-com-tran-ten-fond-memory-steven-stewart

    I see now that it used the light behind the film approach for the display, like Peter recommended...
    Has me rethinking my display...
  • That thing is really cool.
  • Heater, when I said x86 were forward-looking I didn't mean they worked well; I meant they worked poorly because they were trying to do something the industry wasn't really ready for yet. Disconnecting the memory access from the ALU was a radical departure that made it possible to offer 8-bit and 16-bit memory bus chips with nearly identical functionality, and later to expand that same architecture in all sorts of ways, but it truly performed poorly compared to other dedicated 8- and 16-bit designs. It also rendered the time-honored practice of cycle counting completely useless for precision timing.

    Ten years later those architectural decisions would look brilliant, but in 1982 they just looked like an enormous waste of memory and clock cycles.

    Ridiculous as it sounds, the x86 were indeed billed, and quite heavily, as "source code compatible." Again, the problem was that the x86 were looking forward past 8080 features that wouldn't translate so well to a wider-bus world. I did use many programs that were ported almost directly from CP/M to MSDOS once the platform was established. The big clue was that they couldn't use more than 64K for any given function no matter how much real RAM you had (see the little segment sketch at the end of this post). Original app writers quickly figured out how to use segment registers creatively for better performance, but the converted stuff would have to just about be rewritten to take advantage of more RAM.

    I also don't get where you heard that the x86 was "a quick hack job." It was anything but, being years in development when IBM swooped in. But being so far ahead of its time, it wasn't really practical for anything compared to other existing solutions until it got the IBM magic to make it viable. It had only ever been used in a couple of commercial projects, and niche ones at that, by that point. But Intel had been working on the idea almost since the 8080 went live.
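
    To make that 64K ceiling concrete: real-mode x86 forms a 20-bit physical address as segment * 16 + offset, so any one segment register value can only reach a 64K window, and CP/M ports that never touched the segment registers stayed boxed inside it. Here is a toy Python sketch of just the arithmetic, not any particular program's code:

        def physical(segment, offset):
            """Real-mode 8086 address arithmetic: a 20-bit address from two 16-bit parts."""
            return ((segment << 4) + (offset & 0xFFFF)) & 0xFFFFF

        base = 0x1000
        print(hex(physical(base, 0x0000)))   # 0x10000 -- bottom of this segment's 64K window
        print(hex(physical(base, 0xFFFF)))   # 0x1ffff -- top of the same window

        # Different segment:offset pairs can alias the same physical byte, which is
        # what "using segment registers creatively" exploited to reach past 64K.
        print(physical(0x1234, 0x0010) == physical(0x1235, 0x0000))   # True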


  • Cluso99 Posts: 18,069
    edited 2018-03-19 06:45
    IBM used the 8088 first up. I am not quite sure of the main differences but it was a cut down 8086 for a cheaper configuration.

    The clones upped RAM to 640KB. It was also the clones that first used the 80186.

    Not sure if there were any PCs that used the 8086 chip.
  • Heater. Posts: 21,230
    localroger,

    I'm mostly with you there. Especially your conclusions about performance.
    Disconnecting the memory access from the ALU was a radical departure that made it possible to offer 8-bit and 16-bit memory bus chips
    I don't know the details of the internals, but it seems to me Intel must have started that with the 8 bitters. The 8080 and such multiplexed address and data onto the same pins, which I think dictates decoupling bus access with a Bus Interface Unit. The 16 bitters continued that tradition, but now they could multiplex 16 data bits onto 16 address lines. Was that innovation or just a hack to save pins?

    Certainly the 8088/86 had some kind of pipelining going on, which I guess may have been a first.
    Ridiculous as it sounds the x86 were indeed billed, and quite heavily, as "source code compatible."
    Not ridiculous at all. There was a one to one mapping for 8080 assembly language instructions to x86. For every register, operation and addressing mode of the 8080 there was an equivalent in the x86.

    Of course this was not binary compatible and even the assembler mnemonics and such were different. But Intel provided a tool, conv86, that translated your 8080 assembler programs, line for line, into equivalent x86 programs.

    I used conv86 back in 1983 to translate hundreds of thousands of lines of 8080 assembler on a couple of projects. It sort of worked. The problem was that the resulting binary was almost exactly twice as big because the ISA had bloated out. As a result the code ran almost exactly half as fast on the 8088 as it did on the 8080! You really needed to up the clock rate of your circuits and make use of 16 bit instructions to get the performance back to where you were.
    I also don't get where you heard that the x86 was "a quick hack job." It was anything but, being years in development
    That is a surprising thing I learned a few weeks ago while listening to an old Intel hand, sorry I forget his name, describing developments at Intel at the time. It was one of those oral history presentations/discussions made by the Computer History Museum in Mountain View, I believe. I found it on YouTube. Sorry, it's unlikely I can find it again.

    The story is that in 1975 Intel had put together a huge team to design their vision of the future, the i432. That project was nowhere near completion some years later. It was also proving to be very big, requiring two chips, slow, and massively complex to use. The i432 eventually launched in 1981, was a huge flop and was canned. I once had a data book for it and did not understand it at all!

    Meanwhile Motorola, Zilog and Nat Semi were steaming ahead with their 16/32 bit designs. Intel went into a panic, seeing that they would fall behind, and started the x86 project in 1976 as an emergency stop-gap measure. The instruction set and high level architecture were rushed out in three months by one and a half guys! Sure, it took longer to refine the details and get it to production.
    But being so far ahead of its time... Intel had been working on the idea almost since the 8080
    What I learned, from the horse's mouth, is that this is not so.

    In fact, in another oral history from the Computer History Museum, another Intel big shot, who was chief of marketing or some such, describes how they all knew the x86 was "a dog", his words not mine, and how they had to dream up imaginative ways to present it in a good light. He even said that he "could not do those presentations today with a straight face".

    What we learn from this is that the x86 proved to be a huge success not because it was new, radical, advanced or performant. Quite the opposite: it was an incremental upgrade from the 8 bitters. You could drop an 8088 into an old, proven, 8 bit computer board design with almost no changes (which I did on a couple of occasions). You could use all your familiar peripheral chips and busses. Then on the software side there was great compatibility, as I described above.

    Decades later, that incremental upgrade phenomenon came into effect again with the AMD 64 bit extension of x86 and the failure of Intel's new and radical Itanium.

    Sorry I can't find the video links to back all this up but one can piece together the facts of the case from Wikipedia:
    https://en.wikipedia.org/wiki/Intel_8086
    https://en.wikipedia.org/wiki/Intel_iAPX_432
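
    For the curious, the textbook 8080-to-8086 register correspondence that makes that line-for-line translation possible looks roughly like the little Python table below. It is just an illustration; conv86's actual output may well have differed in detail:

        # Conventional 8080 -> 8086 register mapping (textbook version; conv86's
        # actual output may have differed in detail).
        REG_8080_TO_8086 = {
            "A": "AL",              # accumulator
            "B": "CH", "C": "CL",   # BC pair -> CX
            "D": "DH", "E": "DL",   # DE pair -> DX
            "H": "BH", "L": "BL",   # HL pair -> BX, so M (memory via HL) -> [BX]
            "SP": "SP",
            "PC": "IP",
        }

        # e.g. 8080 "MOV A,M" becomes 8086 "MOV AL,[BX]", and "DAD D" (HL += DE)
        # becomes "ADD BX,DX" -- but ADD updates more flags than DAD ever did,
        # which is exactly the flag-difference headache described above.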



  • Heater. Posts: 21,230
    Hey, despite my seemingly disparaging diatribe above, I don't like to pee on those pioneers of the microprocessor. Brilliant guys and a great effort by all of them. With amazing results.

    Especially since, after 50 years, I have yet to design/build even one processor of my own. Which is odd since I started to fantasize about constructing some kind of computer when I was about 10. Not that I had any idea what or how and there were certainly none around at the time. (I'm not going to count my emulators and use of FPGA soft cores.)

    So, all cred to Rayman and his Nibbler.
  • Tor Posts: 2,010
    Cluso99 wrote: »
    Not sure if there were any PCs that used the 8086 chip.
    Clones did, at least. The Olivetti M24, for example (I think the M24 was sold as AT&T 6300 or something like that in the US)
    I remember the M24 as a much nicer machine than the original IBM PC.
    (I now recall that I also had a British Advance-86 - it's probably still in storage somewhere - a low-cost, plastic thing not at all close to the M24. But it did have an 8086 chip.)

  • Heater. Posts: 21,230
    The best IBM PC clone I ever saw was made by Northern Telecom.

    It was much faster than the original IBM PC. It had a gorgeous high res monochrome display. Like a Mac, not the crude green screen of the PC. It had networking built in.

    I did some work on the BIOS for that machine for Northern Telecom. And then we were getting Windows 1 point something running on it.

    Except it was not a Northern Telecom machine, it came from an unknown company in Finland, Nokia. Northern Telecom rebadged it and added some software tweaks.

    It was an Intel 80186 machine.

    After that came the "clones". They took the PC and ran with it in their own direction. The most famous for me being the Compaq 386. Finally a machine one could write 32 bit software for in a huge flat memory space!








  • If the x86 core was developed in 1976, that wasn't very long after the 8080 was introduced. It wasn't practical enough to actually use for anything until several years later. The fact that the instruction set and organization were whacked out over a short time interval was not exceptional in those days; in Soul of a New Machine the new minicomputer's microcode is written in a weekend. But because it was a radical design it required a lot of testing and validation and tweaking. Contrast with the MOS 6502, which was literally designed by hand at the silicon mask level and sent directly to fab.

    The x86 memory architecture was a big departure from even the address bus multiplexing done by chips like the 8080; that was synchronized with memory reads and instruction decoding. The x86 memory pipeline was completely asynchronous to the decoding and execution of instructions. The 8088 had a four-byte prefetch queue (its "pipeline"), the idea being that it could run continuously no matter what instruction the CPU was busy decoding; see the toy model at the end of this post. (Remember, some instructions were quite spendy; MUL was over 100 cycles, and others were four bytes of opcode and arguments.) A run of instructions that didn't require other memory access could pile up in that queue and be cleared quickly. The original x86 was not meant as a 32-bit architecture; it was 16-bit through and through, although some of us ragged them mercilessly for calling the 8088, with its 8-bit memory bus, a "16-bit" machine. It was 16-bit internally but 8-bit to the actual hardware.

    In those days Intel apparently had a big problem with biting off more than they could chew.

    The 80186 was meant for embedded designs and had some on-chip features that made it incompatible with any PC-compatible design. It was used for a few MS-DOS but not-IBM-compatible machines because it was meant to reduce component count, but those fell out of favor quickly when the compatible clones began to appear.

    IBM also messed up Intel by using a reserved interrupt vector for the print-screen trigger, the same vector Intel would go on to use for its streamlined array bounds checking (the BOUND exception) on the 80286 and later models. This made the interrupt handlers for those purposes interesting to maintain.
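
    To make that queue idea concrete, here is a toy Python model of a bus unit that keeps prefetching bytes into a small queue while the execution unit is busy with a long instruction. Purely illustrative, nothing close to a cycle-accurate 8088 (bus contention for operand reads, wait states and so on are all ignored):

        from collections import deque

        QUEUE_DEPTH = 4   # 8088 prefetch queue depth (the 8086 had 6 bytes)

        def simulate(instructions):
            """Toy model of the 8088's decoupled bus unit / execution unit.
            `instructions` is a list of (length_in_bytes, execute_cycles).
            Each cycle the bus unit fetches one byte into the queue if there is
            room; the execution unit starts an instruction once enough bytes are
            queued, then spends its cycle count executing.  Illustrative only."""
            queue = deque()
            pending = list(instructions)
            busy = 0                              # cycles left on the current instruction
            cycle = 0
            while pending or busy:
                cycle += 1
                if len(queue) < QUEUE_DEPTH:
                    queue.append("byte")          # bus unit: one prefetch per cycle
                if busy:
                    busy -= 1
                elif pending and len(queue) >= pending[0][0]:
                    length, exec_cycles = pending.pop(0)
                    for _ in range(length):       # execution unit consumes its bytes
                        queue.popleft()
                    busy = exec_cycles
            return cycle

        # A long instruction (think MUL at 100+ cycles) lets the queue fill up,
        # so the short instructions queued behind it start with no fetch delay.
        print(simulate([(2, 100), (1, 3), (1, 3), (1, 3)]))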
  • Heater. Posts: 21,230
    edited 2018-03-20 06:30
    localroger,

    I think you will like this document "Performance Effects of Architectural Complexity in the Intel 432."
    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.140.5969&rep=rep1&type=pdf

    Actually, I think I said above somewhere that I did not understand the i432 databook back in the early '80s. I wondered if it was really so hard, or was it just that I was really stupid?

    I cannot find that databook today, but I did find the above document. Good grief, it's still incomprehensible; none of us young guys hacking 8/16 bit microprocessors in hex would have any idea where to start with that thing!

    Neither did the compiler writers of the time apparently.

    Anyway, in that document we learn that the decoupling of memory access from instruction execution was in place, and extreme, before the x86. Instructions could be "bit" aligned in memory! (See the sketch at the end of this post.)

    Bottom line is that Intel had to rustle up the x86 in a very short time, with few resources, after investing heavily in the i432, to fend off the Motorola, Zilog and other 16/32 bit designs that were reaching fruition whilst the i432 was bogged down in complexity.

    The 186 was great. It had some nice little features to make embedded design easy and was low power for the time. (At least by the time I got to work with it, in a battery powered wireless communications system.)
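
    To show what "bit" aligned implies for the fetch logic, here is a small Python bit-extractor of the sort such an encoding forces on you. The field widths in the example are made up; it does not model real i432 encodings:

        def read_bits(stream, bit_offset, bit_count):
            """Extract `bit_count` bits starting at an arbitrary `bit_offset`
            (LSB-first within each byte).  A sketch of the unpacking that
            bit-aligned instructions force on the fetch path; the field
            widths used below are invented, not real i432 encodings."""
            value = 0
            for i in range(bit_count):
                pos = bit_offset + i
                bit = (stream[pos // 8] >> (pos % 8)) & 1
                value |= bit << i
            return value

        stream = bytes([0b10110100, 0b01101001])
        # A "6-bit opcode" starting at bit 3, then a "5-bit field" right after it:
        # neither lands on a byte boundary, so every fetch straddles bytes.
        print(bin(read_bits(stream, 3, 6)))   # 0b110110
        print(bin(read_bits(stream, 9, 5)))   # 0b10100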
  • Heater, I have written way more 80186 assembly code than I like to think about for a certain embedded device which is now dead. Let's have a moment of silence.

    Then I re-wrote a lot of that code for the 80386SX, in flat memory model mode. A lot of that was straightforward but some of it wasn't. In particular, the SX used a 16-bit path to memory, so the 32-bit-core 386 ran significantly slower in true 32-bit flat mode than it did in real mode (see the little sum sketched below).

    Fun times. Now everything is in Lua.
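
    The arithmetic behind that slowdown is simple enough. A back-of-the-envelope Python sketch, counting nothing but bus transactions per aligned operand and ignoring everything else:

        import math

        def bus_cycles(operand_bits, bus_width_bits):
            """Bus transactions needed to move one aligned operand."""
            return math.ceil(operand_bits / bus_width_bits)

        # 386DX: 32-bit operand over a 32-bit bus -> 1 transaction
        # 386SX: 32-bit operand over a 16-bit bus -> 2 transactions
        print(bus_cycles(32, 32), bus_cycles(32, 16))   # 1 2

        # In 16-bit real mode most operands are 16 bits wide, so the SX's narrow
        # bus costs nothing extra; recompile for 32-bit operands and every
        # memory-touching instruction pays for the second transaction.
        print(bus_cycles(16, 16))                       # 1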
  • Heater. Posts: 21,230
    Ah, the 386, what joy.

    I was over the moon when I discovered that putting a prefix byte in front of regular 16 bit instructions turned them into 32 bit operations, even when you were running under 16 bit MS-DOS (toy illustration at the end of this post). Made my Mandelbrot set rendering scream along. That program invariably crashed Windows NT, totally, when that came along :)

    Later, it took me a couple of weeks of research, trial and error to write the code required to get an i386 from reset into 32 bit protected mode. Turned out half my problem was that the Intel In Circuit Emulator I was using to step through the code would crash out exactly on the instruction that changed mode.
    That 10,000 dollar ICE cost us more in wasted time than we paid for it!
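
    For anyone who never met that trick: in a 16-bit code segment the 386 treats byte 0x66 as an operand-size prefix, promoting the next instruction to 32 bits. Here is a toy Python decoder covering only the one-byte INC register opcodes (0x40-0x47), just enough to show the effect; it is not a general x86 decoder:

        REGS_16 = ["AX", "CX", "DX", "BX", "SP", "BP", "SI", "DI"]
        REGS_32 = ["E" + r for r in REGS_16]

        def decode_inc(code):
            """Decode one INC instruction (opcodes 0x40-0x47) the way a 386
            sees it in a 16-bit code segment.  Toy decoder, not general x86."""
            wide = False
            i = 0
            if code[i] == 0x66:      # operand-size prefix: 16-bit op becomes 32-bit
                wide = True
                i += 1
            op = code[i]
            if 0x40 <= op <= 0x47:
                regs = REGS_32 if wide else REGS_16
                return "INC " + regs[op - 0x40]
            raise ValueError("not an INC opcode in this toy decoder")

        print(decode_inc(b"\x40"))        # INC AX  -- the plain 8086-era encoding
        print(decode_inc(b"\x66\x40"))    # INC EAX -- same opcode with the 386 prefix byte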


  • Cluso99 Posts: 18,069
    I started a new thread for this OT discussion. Sorry Rayman :(
  • Rayman Posts: 14,589
    No problem with me. I might use this discussion in my book :).

    Actually, it's interesting to think about these old CPUs and how we got where we are.
    It puts the Nibbler in context. In almost every way except one, it's many steps backward. But, in terms of simplicity, it may be a step ahead of anything else...

    I'm not really writing a book, just kidding... Well, I might write a manual for RL4000 one day...