The Official 6502 Versus Z80 Religious War Thread.
Martin_H
Posts: 4,051
A conflict has been brewing since the dawn of the microcomputer era. Which 8-bit processor was the most influential, and which one really shouldn't have been built in the first place?
In one corner is the MOS 6502 which was used in the Apple, Atari, Commodore, and BBC Micro. Neat computers with color graphics and sound. The legacy of these machines is the modern multimedia computer that can be used for creative purposes.
In the other corner is the Z80 which was used in a bunch of CP/M machines and the TRS-80. Boring computers with amber or green monitors used for business or as terminal emulators. The legacy of these machines is the modern business desktop that can be used to run Excel or fiddle endlessly with PowerPoint presentations.
Frankly it's no contest. The 6502 wins in the first round.
Comments
C.W.
Only problem is I don't have a position on these two to make an argument:(
For the sake of something to argue against I'm going to disagree with your observation that:
6502 -> multimedia, creative -> fun and joy.
Z80 -> CP/M -> boring green screens and Excel.
This is so obviously wrong for many reasons:
1) The forerunner of Excel and all that other boring business software was the VisiCalc spreadsheet, which was developed for the Apple II first. Which was, wait for it, a 6502 machine!
2) It ignores all those MSX machines with nice graphics and sound and so on.
3) The Z80 was used in a lot of arcade game machines.
So what about "which 8-bit processor was the most influential"?
I propose the result is not in on that one yet.
Firstly the Z80 was a derivative of the 8080 from Intel. As such it was a dead end with no subsequent influence.
The later 16 bit Intel 8086 was an extension of the 8080 architecture. It was backward compatible at the assembler source level (with a bit of massaging with conv86). It was almost backward compatible at the hardware level. I managed to upgrade an 8085 board by plugging a little board into the 8085 socket containing an 8088 and a couple of support chips. We soon had an 8088 board in production that was mostly the same circuitry as the 8080 version.
The Intel PC architecture is a direct descendant of the 8080.
Meanwhile, the 6502 was basically a derivative of the Motorola 6800. It was used by Acorn in their Acorn Electron and BBC Microcomputer machines. When they wanted to move forward they rejected the 16 bit solutions from Intel and Motorola and ended up leapfrogging everybody by designing their own 32 bit processor: the Acorn RISC Machine.
Does that ring a bell? "Acorn RISC Machine" = ARM, which is the architecture now in pretty much all mobile phones and tabs and billions of embedded systems.
The ARM design, I have no doubt, was influenced by the 6502. That's the whole lean and mean "reduced instruction set" idea. As opposed to the mindless addition of hundreds of new instructions to the 8080 that created the Z80.
So there we have it as far as influence goes. In summary:
6800 -> 6502 -> ARM -> phones, tabs, embedded systems and Raspberry Pi. Yum yum.
8080 -> 8086 -> IBM PC -> all the business and boring stuff. Oh, and lots of games as well.
Oh and Apple Macs, famous for those multimedia and creative types are all Intel based now.
Be happy that I used the 'rubber keyboard' model, and not the + or 128K models...
The Z80 ran at 4 or even 8MHz, and with a normal instruction taking from 4 to 12 cycles, yeah, it qualified on the MIPS scale of processing power, unlike some that had trouble hitting 2MHz...
TWO full sets of registers: A+Flags, B+C, D+E, H+L and A'+Flags', B'+C', D'+E', H'+L', plus indexing registers IX and IY and the Interrupt Vector made interrupts and data handling a breeze!
You switched between the two main banks using the EXX instruction (plus EX AF, AF' for the accumulator and flags). This made for VERY FAST interrupt routines.
The Index registers allowed you to do stuff like LD A, (IX+14) and load the Accumulator with what is found at offset 14 from where the IX register points.
You could also treat the register pairs as 16bit registers in some operations, so that LD A,(HL) uses the HL register as a pointer. Doing LD (HL),13h is also possible, writing 13h to wherever the HL register is pointing.
Fancy 16bit math?
ADD HL, DE maybe?
All that is stuff that the 6502 can't come close to doing...
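Roughly, in Z80 assembly - the labels, port number and data layout here are invented purely for illustration:

        org  0038h          ; IM 1 interrupts vector here
isr:    exx                 ; swap BC/DE/HL for the shadow set
        ex   af, af'        ; and A+Flags for A'+Flags'
        in   a, (0feh)      ; service the device (port is illustrative)
        ld   (latest), a
        exx                 ; swap everything back - no PUSH/POP needed
        ex   af, af'
        ei
        reti

main:   ld   ix, record     ; IX points at a 16-byte record
        ld   a, (ix+14)     ; fetch the byte at offset 14
        ld   hl, (total)    ; 16-bit math: total = total + step
        ld   de, (step)
        add  hl, de
        ld   (total), hl
        jr   main           ; spin, for the sake of the sketch

latest: db   0
record: ds   16
total:  dw   0
step:   dw   1

The whole interrupt context switch costs a handful of T-states instead of a pile of PUSHes and POPs.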
Then it starts getting fun with Block MOVe commands such as the LDIR (uses HL as a pointer to where to move from, DE points to where to move it to, and the BC register tells how many bytes to move) and variations.
There's a Block Search command, too.
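Sketched out as a fragment, with made-up labels and lengths:

        ld   hl, source     ; HL = where to copy from
        ld   de, dest       ; DE = where to copy to
        ld   bc, 256        ; BC = how many bytes
        ldir                ; one instruction moves the whole block

        ld   hl, buffer     ; block search: find the first 0Dh (CR)
        ld   bc, 128        ; in a 128-byte buffer
        ld   a, 0dh
        cpir                ; Z flag set if found, HL ends one past the match
        ret

source: ds   256
dest:   ds   256
buffer: ds   128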
Both CPUs can handle 256 IO ports...
Does the 6502 have INIR and OTIR commands?
Or doesn't throughput matter that much?
Some say that the Z80's lack of a built-in serial port is a hindrance and makes it weaker than, say, the 8085, but with fast interrupts it can bit-bang just fine. Also, freeing those pins meant that the Z80 could have completely separate data and address pins, unlike the 8085 and similar, which have to waste a clock cycle just to latch out 8 bits on the shared part of the bus for every read or write to memory.
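For what it's worth, the block-I/O version of that throughput argument looks something like this (port number and length invented for the example):

        ld   hl, sector     ; HL = destination buffer
        ld   b, 128         ; B = byte count (up to 256 per pass)
        ld   c, 7fh         ; C = the device's data port (illustrative)
        inir                ; read B bytes from port (C) into (HL), incrementing HL
        ret

sector: ds   128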
There's never been any doubt as to which was the best CPU, just which was the more popular home computer.
Incidentally, most such 'wars' are started by 6502 fans... Can't imagine why...
You have of course highlighted why the Z80 was doomed.
Zilog took the 8080 and added a billion handy instructions for moving blocks, twiddling bits and so on. And then there were the addressing modes. Mostly that was there to make life easier for the assembler language programmers.
Problem was, the world was moving on: systems were getting bigger and high level languages were coming into use. Compilers did not make use of most of those billions of fiddly little instructions. It was pointless.
Meanwhile the 6502 was built to be small and cheap. It minimized the instruction set as a result. Those ideas about minimalism led Acorn to design the ARM.
That brings us to the real religious war of "Complex Instruction Set" vs "Reduced Instruction Set". CISC v RISC. Which raged for decades and probably still does.
http://visual6502.org/
My favorite discovery about the 6502: it uses "might makes right" bus gating. That is, in numerous situations where one signal source or another might need an internal bus, instead of gating the correct signal in and the wrong signal out, the correct signal source was given BIGGER TRANSISTORS so as to override the interloper. Genius or madness, this is not a strategy that could readily be scaled to higher levels of integration, unlike the 6800 and 8080, which led to the 68000 and 8086 in similar not-quite-compatible ways.
While the Z80 and 8086 were forks in the upgrade path of the 8080, and we know where the 8086 went, the Z80 was not the end of its own line. The original 16-bit Z800 failed in the marketplace but the 24-bit (!) eZ80 introduced in 2001 was quite successful.
@Heater, there's merit to your argument that the 6502 influenced the adoption of RISC chips.
Those Visual6502 guys are amazing. Here they are reverse engineering the Sinclair Scientific calculator: http://files.righto.com/calculator/sinclair_scientific_simulator.html
The first single chip calculator in the world and all that scientific functionality in 320 instructions!
I would not say that the 8086 was a "fork in the upgrade path of the 8080". The 8086 was in the direct lineage from the 8080 to modern x86 machines. The Z80 was the fork that withered and died. I know it lived on in the embedded world, but it was certainly not in the spotlight like x86, ARM, PowerPC, SPARC, MIPS etc.
The Z80 was a souped-up 8080. Its main claim to fame was the onboard DRAM interface that provided more memory per dollar. Its interrupt vectoring was also popular.
The 6502 was a rehashed 6800. Its main claim to fame was its cheap price.
The Z8 was a nice micro, particularly when paired with the Z8530 dual SCC. It has a lot of registers, all accessible by the instruction set. It was used in a lot of modems, including ours.
I originally designed our AT command modems using a 68705. When I later changed to the Z8, I converted the code in less than a day, including writing the Turbo Pascal program to convert the code.
The 68000 was a far better chip than the 80x86, but the IBM PC used the 8088/80x86 and set Intel on a very successful path. The Lisa, Mac and Amiga all used the 68000.
But no micro has given me so much pleasure to program as the P1. The only other computer that gave me this much pleasure was a Friden/Singer/ICL System Ten & 25. And they are remarkably similar in many ways - hub and cog ram, multiprocessor (20 time-sliced partitions on the mini), double operand instructions, no accumulator, jmpret instruction, assembler (originally 15 instructions, later ~27).
I was on a team that ported a business operating system to the Z80.
It used a multi-user operating system called OASIS, developed by a guy named Tim Williams I think. The IBM PC wasn't out yet. I bought my first home computer, an Apple II, along with Microsoft's version of Basic so I could play around with Basic since OASIS used a pretty sophisticated (for Basic) language.
We were running up to 4 users on the Z80. You could have up to 3 terminals or printers along with one terminal for control.
OASIS Basic was an advanced version of Basic using subroutines without the dreaded GOTO. It used a proprietary file system that included an indexed/sequential database with query capability.
I was in charge of writing business software for the system. Later, I integrated systems using Altos computers and the OASIS operating system and wrote custom software for them. At that time, the competition was DOS or very expensive mini computers.
After a few years of that a company I had sold single systems to outgrew them, hired me and we bought a system that had been developed (darned if I can remember the company) that had any number of processor boards, each with its own onboard memory and Z80 processor, hooked to a backbone controlled by another Z80 and a large memory bank. Our particular system had 24 boards. Each board could handle 4 "users" (printers, etc.).
About that time, Microsoft came out with Windows and we replaced our dumb terminals with PCs running Windows and terminal software so we could use Word, Excel, etc., as well as our business software. One of our "terminals" was a Mac - we used that for graphics stuff. We had all of the usual accounting software, payroll and barcode controlled process and inventory control. One "terminal" was also dedicated to UPS shipping.
I also sold a few Altos systems to video rental stores for use as POS Terminals using barcode for checkout, checkin and inventory control.
Of course, it wasn't too long before Microsoft came out with MS Server and SQL Server, Oracle came along, and everything changed. But for a short but exciting time we did real work with very inexpensive microprocessor based business systems.
So, I have no particular allegiance to either of the processors, I think they both had a huge impact on personal and small business computing.
The long dark winter of the PC era was incredibly boring by comparison. I think the last thing that fired my imagination was the Inmos Transputer.
Things perked up with the ARMs and mobile phones and tablets at least bringing some diversity and leading to fun toys like the STM32 the Raspberry Pi, the Beagle bones and so on. And thank God there was the Propeller.
I wonder if kids of today will be reminiscing over their first iPhones or Androids in thirty years' time?
That most programmers were too lazy to learn the ins and outs of the CPU they were working on?
Actually, the complete official instruction set is 138 instructions, but with all the variations on register use and such, you end up with something close to 700. (They added two prefix bytes.)
Even if you only used the 'non-prefixed' instructions, it's very capable.
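To illustrate the prefix trick: the index instructions reuse the existing HL encodings, with opcode bytes as per the standard Z80 opcode map:

        ld   a, (hl)        ; 7E       - plain 8080-style load
        ld   a, (ix+14)     ; DD 7E 0E - DD prefix swaps HL for IX, adds a displacement
        ld   a, (iy+14)     ; FD 7E 0E - FD prefix does the same for IY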
Most compilers for the Z80 were actually created for the i8080 and only minimally adapted for the Z80. So yeah, there was a problem with those tools.
And when a lot of the design work on those computers was done with cross-platform tools that weren't even built to handle the unique qualities of the CPU, you suddenly ended up with less-than-stellar results.
Add a bit of klutziness and suddenly you CAN'T use some of the functions at all.
The Z80 has a special handler at 66h for NMIs...
In the Speccy, the routine checks a memory location, and if the contents are ZERO, that value is used as a jump address...
In other words, the only thing it can do is reboot the computer. This is generally believed to be a remnant of the debugger used during development, and someone forgot to change an NZ to a Z when going 'Gold'.
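From memory of the ROM disassembly, the routine at 0066h goes roughly like this (NMIADD is the system variable at 5CB0h):

        org  0066h          ; NMI entry point
nmi:    push af
        push hl
        ld   hl, (5cb0h)    ; fetch NMIADD
        ld   a, h
        or   l
        jr   nz, done       ; the bug: arguably should have been JR Z
        jp   (hl)           ; only ever reached with HL = 0, i.e. restart
done:   pop  hl
        pop  af
        retn

With the test that way around, the JP (HL) is only ever reached when HL is zero, so a jump to address 0 - a reset - is the only possible outcome.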
Where the Z80 excels is in embedded systems needing 'a bit of grunt' or where the programmers are allowed to use the extended capabilities.
It's pretty easy to handle a floppy drive if you have 'DMA' type instructions like INIR and OTIR, and I know that it has been used as such in larger computer systems. (Found one in a decommissioned Norsk Data ND-5500.)
http://en.wikipedia.org/wiki/ND-500
OK, having an 8bit CPU as the floppy controller in a 32bit computer (the series was called 'Samson') with a 16bit 'housekeeping' controller named 'Delilah' may not be the best example...
Might makes right is a pretty horrific design choice...
RISC / CISC
Frankly, has anyone ever checked the Atmel AVR 8bit series instruction set? I may be wrong, but... to me that looks suspiciously like the LD (IX+d) instructions of the Z80?
And frankly, that thing seems to have more instructions than the Z80, too..
CISC / RISC is not an issue on 8bit CPUs.
There's only 'Enough to get the job done' and 'Why can't I do that'
How much does optimised tools matter?
I once wrote a BASIC (MetaComCo Basic, revised version) variant of the 'Eratosthenes sieve' on an Atari 520STfm which took 13 seconds to crunch through the numbers 1 to 255.
The same program reworked to run on the Sinclair ZX Spectrum took 8 seconds.
Atari:
ZX Speccy: (If a(n) = 0, it means that 'n' is a Prime)
How fast can a C64 do the same task using BASIC?
I know of one project that started out on 8080 in assembler. It migrated to x86 easily, thanks to Intel's conv86 program. It migrated to Motorola 68xxx, thanks to me reverse engineering all the assembler and rewriting it in C. Since then it has lived on in PowerPC based embedded units and later ARM. Meanwhile the same code has been used in simulators on Linux and on Windows.
That code is still in use. That kind of history is why assembler usage dropped off, and why the whole point of the overly complex Z80 instruction set evaporated.
Meanwhile, compilers for C and Pascal etc. on CP/M had no reason to support such instructions; they were not "natural" for the languages and had little use in writing desktop applications anyway. BDS C, for example, only makes use of the block move/compare instructions for string operations, if I remember correctly.
Comparing a Speccy and an Atari via some BASIC program is not very informative. The speed differences there are more down to the skill of the writer of the BASIC interpreter than the processors running the code.
Erlend
I have to agree with Cluso99 though. No micro has given me so much pleasure to program as the P1, and when it comes to interfacing with hardware it is in a league of its own. No other micro comes close.
The mighty Zicomp Computer.
Had a Z80 and 128K for every user up to a total of 64 users.
1 main traffic cop controller with a Z80 and 512K
Up to 4 disk controllers, each with a max of 2 hard drives.
You see the first programming I ever did on a micro-processor, as opposed to a mainframe, was in 1980 with the Motorola 6809.
Now, if you are going to program in machine code you might as well write it in HEX. Which we did as we had no assembler for this relatively new chip.
You might as well wire wrap your own dev board. Which we did as we had only the naked chip. We included ROM, RAM, Parallel I/O, Serial I/O, timers, the interrupt controller and a cassette tape storage interface.
Later I had to work with the 6502 and 8080/Z80. Both very depressing by comparison to the glorious 6809.
The 6809 was doomed of course. Not enough like an MCU to continue in that market. Not enough like a 16 bit machine to face the x86 and 68000.
Oh well.
While terrible from a power consumption standpoint, I like it! This idea was used in the space shuttle in a mechanical sense: a majority of critical control actuators could overpower an incorrect one. In this case there were 4 actuators, but this system could scale up. http://www2.cs.uidaho.edu/~krings/CS449/Notes.S13/449-13-26.pdf
You mean like the Zicomp mentioned here in 1986: http://www.retroarchive.org/cpm/cdrom/ZSYS/SIMTEL20/Z-NEWS/Z-NEWS.408
And I quote:
I highlight that last sentence because it seems to show that people were complaining about slow and bloated software a very long time ago!
Then you might have "Force Fight" detection on the actuators to detect when one of them is misbehaving and neutralize it.
(goes to read with interest)
The 6502 uses memory-mapped I/O. As many ports as you want, but it costs address space, or it costs a bit of hardware logic and time to bank-select into said address space.
This made a lot of things really simple to do, many of which can be seen in the Apple 2. Controlling a disk? If you are Woz, it's simple. A handful of chips, serial ROM to make a state machine, and just map it all into some address space where the 6502 really shines at bit twiddling, fast, cycle exact.
Really different approaches here.
Ahh, but it was! The 6809 was the powerhouse instruction-wise. An assembly programmer's dream chip. Frankly, better than either the Z80 or the 6502.
But if I had to choose, the 6502 would be my choice here. It wasn't as smart a chip as the others, but it was lean and mean and it had fast instructions. Couple that with the ability to run the CPU on one clock phase and RAM / video on another phase, and you got a nice, jitter-free, fast computer. The Apple 2 started this trend. Tandy did it with the CoCo as well.
Re:6809 doomed
Sadly yes. But I still have one, and it's in my CoCo 3, and it's a LOT of fun to program in assembly language.
So I wrote cross-assemblers for the Z80, 6800 (and family) and the 68705, all on my Singer/ICL System Ten.
You should also remember, back in those days, memory was very expensive. So systems rarely had 64KB.
When I started designing boards for the Apple // (~1982-3), it was only a short time later we were using the Apple /// with Microsoft's Z80 card and CP/M and the 5MB Corvus? HDD. Cheap cross-compilers were available for CP/M by then. Our plug-in cards that I designed were based on the Z80 and then the Z8 & Z8530 SCC - they did 2780 and 3270 synchronous protocols to connect to IBM mainframes. My point is that the Apple //s & Apple ///s that we were using (mostly supplied by Apple) also had Z80 cards to run the OS we were using to develop the hw and sw. So both 6502 & Z80 were in each of our micros.
Didn't Wang use Z80s in their word processor minis? There were certainly a lot of professional small computers (pre IBM PC) that were based on the Z80.
Yep. I remember there being a lot of NorthStar Horizons: http://en.wikipedia.org/wiki/NorthStar_Horizon
and Intertec SuperBrains: http://en.wikipedia.org/wiki/Intertec_Superbrain
being around.
One of the most amazing things I have seen is a BBC Micro as an integral part of some test equipment for some part of Airbus flight controls!
Also the Beeb had a Z80 expansion card for running CP/M.
What was brilliant about might makes right for the 6502 was that it eliminated a gate delay, which tightened up all the internal control logic. What boggles me is that this chip has been used continually since the 1970s -- I recently saw a hack of a keychain photo fob that turned out to be based on the 6502 core -- and NOBODY knew how the dang thing worked until the Visual6502 guys did their project to reverse engineer it. I remember in the 1990s the Atari 2600 hackers in particular investigating the "undocumented instructions," treating the chip as a black box and just trying out all the input possibilities, but not understanding why they were getting the pattern of results they got.
To me this is the most amazing thing about the 6502. People have been trading around this set of mask patterns for decades, downsizing them as processes improve and adding on-chip peripherals all over the place, and nobody had any idea for over thirty years exactly how the thing worked.
Yes, in the '90s, figuring out undocumented instructions was very interesting. Some of them are very useful. Others are bizarre. The "might makes right" explains how all those other instructions ended up doing what they did. On the 2600 (VCS), finding any new instructions was a big deal, because cycle timing, tight memory constraints and a largely software driven video system really took all the chip had. Save a few cycles, and suddenly some new video display kernels become possible. That process has gone on for about 30 years, where today things are being done on that little system that were thought impossible several times over.
I've been a fan, watching and participating some over the years. Great fun!