
Assembly Language Question


Comments

  • jmg Posts: 15,140
    Yanomani wrote: »

    That looks like a part-number alias for the 27E512 parts I mentioned above, but I suspect they retired the 27E512, as Google prefers the 27C512 :)
    Looks to need a 12V Vpp for PGM and 14V for erase.
    Most universal programmers should support those.


  • Heater. Posts: 21,230
    I did once see an EPROM not working.

    I had managed to plug it in backwards. The whole die lit up like a light bulb. Briefly.

    It's nice to be able to stick them under a microscope and see all the traces on the chip.
  • Cluso99 Posts: 18,066
    Heater. wrote: »
    I did once see an EPROM not working.

    I had managed to plug it in backwards. The whole die lit up like a light bulb. Briefly.

    It's nice to be able to stick them under a microscope and see all the traces on the chip.
    Or the burnt die ;)
  • Heater. Posts: 21,230
    Something is missing from my 8088/V20 board.

    I removed the real-time clock daughter board to see what is underneath:

    1) Intel 8254 Timer
    2) Intel 8274 Multi-protocol Serial Controller, DIP40 (Great, I can do SDLC!)
    3) LT1081 RS232 drivers.

    Then I realized. There is no 8259 interrupt controller on this board. WTF? I don't think I have ever seen an 8088/86 board without an 8259 on it.

    I start to worry. Where are those serial and timer interrupts going? Are they handled by the NMI pin? If they are handled by the PALs on board I might never figure it out.

  • Cluso99 Posts: 18,066
    Aha :) SDLC. Brings back memories of the early '80s.
    I designed and programmed a Z8681 + Z8530 SCC + SRAM card that went into the Apple //e and Apple ///, which we sold to Apple. It did comms to IBM mainframes, including 2780 and 3270/3274 emulation, with the latter running SDLC.
  • Yanomani Posts: 1,524
    jmg wrote: »

    That looks like a part-number alias for the 27E512 parts I mentioned above, but I suspect they retired the 27E512, as Google prefers the 27C512 :)
    Looks to need a 12V Vpp for PGM and 14V for erase.
    Most universal programmers should support those.

    Sorry, you are right; the required 14V erase voltage had escaped my notice.

    Rochester Electronics has Catalyst Semiconductor's CAT28F512HI-12, with 12V-only erase and programming voltages, in stock at US$3.46 apiece.

    @Heater

    I'm unsure which package they have on hand; you'll need to know in order to create a breakout/converter board to adapt its 32-pin pattern to the 28-pin footprint present on your board, and also to provide some means of connecting it to the Propeller for proper programming before use.
    Heater. wrote: »
    Something is missing from my 8088/V20 board.
    ....
    I start to worry. Where are those serial and timer interrupts going? Are they handled by the NMI pin? If they are handled by the PALs on board I might never figure it out.

    As time goes by, perhaps you will come to the same conclusion I've drawn from my own experience, already mentioned in an earlier post: the closer your circuit resembles an almost-full in-circuit emulator for your intended CPU (the V20 in the present case), the easier it will be to get its resources fully mapped.

    With a Propeller, a bunch of those '245-style bidirectional level converters and a few more active and passive parts, easily laid out on a modest two-layer PCB, you'd be ready to start in a snap! Perhaps you'd also have enough room for the EEPROM/flash programmer, as a bonus.

    Since speed would not be a concern in that case, you could set up the connection to the 40-pin CPU position on the board using flat cables and a machined 40-pin plug inserted into the CPU socket.

    Thereafter, you'll be in your preferred field: software....

    With the faithful help of your "supercharged" logic analyser, and playing a fun "peek-and-poke-around" game (even in BASIC), all the pesky PAL/GAL-based decodings will expose their bad-smelling guts!

    Henrique
  • Yanomani Posts: 1,524
    Taking a closer look, helped by IrfanView's loupe (and a bit of the "image sharpening" tool), the last concern would be removing the crystal and taking control of the AMD 8284A clock generator, laid between it and the V20, yourself. AMD's parts are exact clones of Intel's 8284A.

    pdf.datasheet.live/9fe118a5/intel.com/8284A-1.pdf

    But don't worry too much about it being some huge challenge to face; it's a standard, well-documented part that uses some of the signals provided by the V20 itself and, through a small set of easily mapped synchronizers, produces outputs that return to the CPU and could eventually also be connected to other logic interfaces surrounding it. These include READY, which can be used to insert a WAIT state, lengthening bus access cycles and thus adapting the pace of bus data exchanges to slow peripheral devices.

    To avoid further unforeseen surprises, it would be good to have a better-defined picture of the circuit board, to draw the eye to, for example, any one-shot pulse generators that may have been inserted into the path of some signals to lengthen or shorten their duration. Those would demand some attention to setup and hold times before grabbing the resulting interface and data signals from the whole circuit, whether they come into the Propeller or are just displayed on the LA, in order to do a better analysis of their behavior.
  • Yanomani Posts: 1,524
    edited 2018-09-24 13:58
    Fortunately, thanks to the well-acknowledged work of Mr. Jim Williams (sadly, already deceased) in Linear's application/design notes, there is a lot of good advice for the construction of the 12V Vpp power source.

    analog.com/media/en/reference-design-documentation/design-notes/dn58f.pdf

    analog.com/media/en/reference-design-documentation/design-notes/dn017f.pdf

    analog.com/media/en/technical-documentation/application-notes/an31.pdf

    P.S. Perhaps you could find lots of other circuits, mostly charge-pump-based, including flip-flop and '555-driven ones, promoted as being able to generate that pesky 12V.

    For my own use, I prefer the best I can find; the better documented and debugged they come, the better.

    Badly programmed, or even fried, chips are a headache no one needs, overcomplicating already challenging pursuits.
  • Heater. Posts: 21,230
    Cluso99,

    SDLC was great. Back in the day we implemented SDLC loop communication that ran around tens of machines in factories, reporting status/faults/product quality etc. The central system receiving all of this was a PDP-11.

    It was such a great upgrade from the 8-bit parallel bus they had previously, which I think dated back to a time before they even had microprocessors in the machines' control systems.

    Heck, they still had some parts of those machines controlled by hydraulic logic built with spool valves and pressure sensors. Nobody had figured out how that hydraulic logic worked, so it did not get subsumed into the microprocessor world.




  • Heater. Posts: 21,230
    Yanomani,

    I'm loving all the great suggestions you have there. The voice of experience. And giving it all serious consideration before I do anything.

    This is all a blast from the past for me. I designed my first TTL circuits back in 1971 or so. In the early '80s I made a little board with an 8088, the 8284 clock generator and some other glue on it that plugged into the 8085 socket of an existing product. It worked a treat. As a result the hardware guys were inspired to quickly spin up a new board using the 8086.

    The idea of using a Propeller to drive this V20 socket sounds like a similar problem.

    However, it has to be said that nowadays I take more of a software hacker's approach to the problem. For example...

    1) If I can get some code into the machine for it to run then I own it. Which I can do by programming an EPROM.

    2) With some simple code I can scan the entire memory and IO address ranges. Even if I don't have any RAM to work from because I have not found it yet.

    3) With that scan and a little probing of chip selects/read write lines with my oscilloscope I can figure out the addresses of RAM, serial chip etc.

    4) With knowledge of where RAM and serial IO is I can make a bit more complex code in the EPROM that accepts commands over serial and does what I want. Including boot loading more code to run.

    In fact, I suspect that if I just power it up, let it run its existing code and watch the selects etc with a scope, I can find the RAM addresses without doing any EPROM programming. That is steps 1), 2) and 3) taken care of.
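    To make steps 2) and 3) concrete, here is a minimal sketch of the write/read-back probe, in C for readability (real ROM startup code on an 8088 would be assembly, since there is no stack until RAM is found). The fake page in main() is a stand-in of mine so the sketch can be dry-run on any PC:

        #include <stdint.h>
        #include <stdio.h>

        /* Probe one location of a candidate page: RAM latches the test
           patterns, ROM or an empty bus does not. Original contents are
           restored on success. */
        static int page_is_ram(volatile uint8_t *p)
        {
            uint8_t saved = *p;
            *p = 0xAA;
            if (*p != 0xAA) return 0;   /* didn't latch: not RAM */
            *p = 0x55;
            if (*p != 0x55) return 0;
            *p = saved;
            return 1;
        }

        int main(void)
        {
            static uint8_t fake_page[16];   /* stand-in for a page of the 1 MB map */
            printf("probe says: %s\n", page_is_ram(fake_page) ? "RAM" : "not RAM");
            return 0;
        }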

    Also, I was contemplating the idea that an EPROM programmer could also read the existing EPROM content first. That would give me a binary image that can be disassembled, thus revealing RAM addresses and who knows what other details.

    Anyway, the good news is that it looks like I'm about to acquire a second board like that. With a test rack, backplane, power supplies and all.

    I'll try and get a better picture of the board and annotate what I think is what so far. Apart from the PALs it all looks pretty straight forward.
  • jmg Posts: 15,140
    Yanomani wrote: »
    Sorry, you are right; the required 14V erase voltage had escaped my notice.
    ....
    P.S. Perhaps you could find lots of other circuits, mostly charge-pump-based, including flip-flop and '555-driven ones, promoted as being able to generate that pesky 12V.

    That 12V/14V is only slightly more annoying than 12V alone.
    My first suggestion would be to dig out / locate an old universal programmer that supports the Winbond parts.
    Otherwise, it's just 2 control pins and a linear or LED/Boost flyback regulator in a roll-your-own design.
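    To put the roll-your-own idea in code form, here is a dry-run sketch of the per-byte control flow. The helper names and the 100 us pulse width are placeholders of mine, not the Winbond procedure; take the real waveform and timings from the datasheet:

        #include <stdint.h>
        #include <stdio.h>

        /* Stub pin helpers: on real hardware these would wiggle GPIO on
           whatever micro drives the programmer; here they just trace the
           sequence so the sketch runs anywhere. */
        static void set_address(uint32_t a) { printf("A=%05lX ", (unsigned long)a); }
        static void set_data(uint8_t d)     { printf("D=%02X ", d); }
        static void vpp_set_volts(int v)    { printf("Vpp=%dV ", v); }
        static void pgm_pulse_us(int us)    { printf("/PGM %dus\n", us); }

        static void program_byte(uint32_t addr, uint8_t data)
        {
            set_address(addr);
            set_data(data);
            vpp_set_volts(12);  /* 12V to program; erase wants 14V */
            pgm_pulse_us(100);  /* placeholder pulse width         */
            vpp_set_volts(0);   /* drop Vpp before the next cycle  */
        }

        int main(void)
        {
            program_byte(0x00000, 0xEA);  /* e.g. first byte of a far JMP */
            return 0;
        }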
  • Yanomani Posts: 1,524
    Hi Heater

    Thanks for all the kind words, but rest assured that I have many more reasons to be grateful to everyone who behaves as you do at the forums, and, of course, to all the people at Parallax.

    Armed with a loupe, I was chasing some V20H parts on the internet, slightly changing the search terms, until I found a good offer...

    https://ebay.com/itm/5pcs-D70108HCZ-16-D70108HCZ-NEC-Encapsulation-DIP-V20HL-V30HL-16-8-16-BIT-/261297750463

    I don't understand why, but I had to type "D70108HCZ" to reach that link.

    I don't know the seller's reputation, other than what eBay advertises at the link, nor do I have any other info about them. But since that version of the chip would let you try some clock-stepping on your hardware with very little effort on your part, and it appears to be an almost irresistible deal at such prices, I believe it's an opportunity you shouldn't let slip away.
  • Yanomani Posts: 1,524
    jmg wrote: »
    Otherwise, it's just 2 control pins and a linear or LED/Boost flyback regulator in a roll-your-own design.

    Yes, and no, at the same time.

    I'm not sure these Winbond parts weren't pulled from some lot of surplus motherboards, or some other non-pristine source.

    Who knows how many times they've been subjected to rock-hard abuse during their lifetime?

    analog.com/media/en/reference-design-documentation/design-notes/dn58f.pdf

    Taking Jim Williams' advice, expressed in the design note linked above, including figure 6 with its image of some destructive ringing possibilities, I surely prefer to imagine them being gently cared for.

    But, sure, it may just be me being extra careful.
  • Heater. Posts: 21,230
    edited 2018-09-25 01:54
    I can't find any data sheet for a V20 that indicates it can tolerate a static clock.

    This datasheet includes AC characteristics for the uPD70108H-10 that indicates a minimum clock period of 100ns.

    Mine is only a -8 so I guess that is 125ns.

    Whilst I always love to pick up vintage chips, I'm going to pass on that one. I have too many here already, from 8085 to 68000.

    Edit: OK, I found it. This sheet says "...fully static circuits are employed..." in the uPD70108H, "...thus allowing a clock stop function..."
    http://www.datasheet.hk/view_online.php?id=1131329&file=0065\upd70108hg-10-22_506148.pdf
  • Yanomani Posts: 1,524
    edited 2018-09-25 03:47
    Hi Heater

    Just a reminder: period (T) is the inverse of frequency (f), given by f = 1/T <=> T = 1/f. Thus, when you find a non-fully-static design, you'll be faced with a maximum clock period specification.

    Beyond that limit, the internal dynamic components (dynamic register memory, dynamic flip-flops, etc.) are not guaranteed to retain their correct state, nor to properly sequence all of the circuit's states (stages), thus causing some (many, in fact) unintended behaviours.

    The original V20 datasheet really does have that spec, with a value of 500 ns (= 2 MHz minimum frequency) for the maximum clock period.

    As for the V20H, there is no spec for its maximum clock period, which is normal for a fully static design; in this case, a fully static CMOS design.
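    Putting numbers on it, with the figures already quoted in this thread (the original V20's 500 ns maximum period, and the 8 MHz rating of Heater's -8 part):

    \[
    T_{\max} = 500\,\text{ns} \iff f_{\min} = \frac{1}{500\,\text{ns}} = 2\,\text{MHz},
    \qquad
    f_{\max} = 8\,\text{MHz} \iff T_{\min} = \frac{1}{8\,\text{MHz}} = 125\,\text{ns}.
    \]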

    Henrique
  • Heater. Posts: 21,230
    Oops, yes, I was tired enough I got my frequency and period upside down.

    I did find the right datasheet though, link above, it clearly states:

    Operating Frequency: Min = DC

    The MAX field of the clock cycle table is a blank.

  • kwinnkwinn Posts: 8,697
    After looking at the data sheet you posted, it sure looks like it would be simpler to build an EPROM programmer and write the code to scan the memory and I/O address ranges. I forgot how much fun the multiplexed address/data/status bus chips could be, particularly when the small/large mode choice is added to the mix.
  • potatohead
    Blank = go as fast as you need, good luck, works until it doesn't
    And it's still an amazing board.

    You should see the ones the MTA had built to run their current turnstiles. The technology is decidedly up to date, and works perfectly, but the machinery to sell their cards isn't.

    Let's just say, Heater, it would get your usual response to Windows going.

    I simply point out that the technology isn't appropriate for working in a user-hostile environment. (Underground)
    ---
    And this message is being sponsored by the Yeti Mountain Rescue Society.
  • Heater.Heater. Posts: 21,230
    @potatohead,
    Blank = go as fast as you need, good luck, works until it doesn't
    Ha, now you are doing it. Getting frequency and period upside down.

    "The MAX field of the clock cycle table is a blank." means the clock cycle has no upper limit in it's length. We can go a slow as we like. Unlike the older V20 where the clock had to run at a minimum frequency.

    This can be useful for manually stepping the clock and watching what happens.

    But I was thinking, the 8088 and V20 have a HOLD input which basically stops execution. So rather than control the clock one can control HOLD to much the same effect. With the complication that HOLD needs to be synchronized with the clock.

    Which reminds me of a story...

    Back in the day the company I worked for had been building hundreds of boards using the 8085. These had two 8085s working together and sharing some RAM. One day a new batch of boards had a 100% failure rate. Lots of random corruption and crashes.

    After a lengthy investigation I noticed that the new batch used 8085 chips from a different Intel fab. Same part number, different fab.

    Turned out the design used HOLD to arbitrate which CPU had access to the shared RAM, but it was not synchronized to the clock. Chips from one fab tolerated this; chips from the new fab did not.

    An extra flip-flop in there fixed it. I never understood why I, the software guy, had to debug and fix the hardware guys' stuff.

    If only we had the multi-core Propeller at that time. It would have fit the application perfectly.




  • OH :) I DID. This is catching, Heater!
    An extra flip-flop in there fixed it. I never understood why I, the software guy, had to debug and fix the hardware guys' stuff.

    Nothing puts a grin on my face like software engineers with a scope in their grasp. An early mentor encouraged me to do that, and told similar stories.

    As for why?

    Simple: software changes can happen right up to ship, and even after, and can be distributed and deployed sans the higher physical cost of anything hardware.

    Plus, people can't see what a bodge it may be, unless they too grok software and go looking.

    It follows from there.
  • Heater.Heater. Posts: 21,230
    edited 2018-09-26 19:28
    Oh yes, scopes, logic analyzers, even humble multimeters were an essential part of my software toolbox for many years.

    Not just for verifying software operation, timing etc, but sometimes it was the only way to convince the hardware guys that the fault was on their side of the fence :)

  • kwinn Posts: 8,697
    Heater. wrote: »
    Oh yes, scopes, logic analyzers, even humble multimeters were an essential part of my software toolbox for many years.

    Not just for verifying software operation, timing etc, but sometimes it was the only way to convince the hardware guys that the fault was on their side of the fence :)

    Hmm, I recall the same experience from the other side of the fence a few times. Also a time or two where the fault could legitimately be on either side, depending on one's perspective.
  • Heater. Posts: 21,230
    edited 2018-09-27 02:48
    You mean you were on the hardware side of the fence and some software guy showed up faults in your hardware? :)

    It's an age-old battle: is the fault hardware or software? Sometimes it gets personal as blame is thrown around.

    Some classic examples:

    1) The Intel 8259 interrupt controller would register short glitches on any interrupt input as interrupt 7. This was even, literally, referred to as a "feature" in the datasheet. First and only time I saw a bug actually referred to as a feature for real. Well, why is that short glitch even getting to the chip? Why is it causing int 7? No worries, the software guys facepalm, mutter about f'ing hardware, and deal with it in code.

    2) I found that a custom radio modem ASIC and board I worked with on a military communications project had no output to wake the processor when a packet arrived. This meant it had to be polled; the processor could not sleep, and battery life fell below specification. The hardware guys said it was impossibly complex and expensive to fix, requiring a whole board redesign and months of delay, and refused to do it. Us software guys figured out what changes the hardware needed, how simple they were, and even exactly the tiny changes to the PCB layout it would take. Which actually made the layout simpler.

    3) One project used a custom, in-house CPU in a multi-processor design. The software suffered random corruption and crashes. I found that the version of the CPU chip the hardware guys delivered did not have a working LOCK instruction, so data sharing got messed up.

    4) One piece of software had a bug filed against it because it reported the wrong speed of a multi-million-dollar machine. Scope in hand, I found this was not even an electronics hardware problem. They had built the machine with the wrong gear ratios in it. It was quite a lengthy and expensive strip-down for them to fix that.

    5) We discovered the Intel 286 would always give the wrong answer for MUL if the operand was immediate and negative. The bug was finally acknowledged by Intel under NDA.

    6) In a certain airliner's fly-by-wire system I found the software would halt with an exception if an out-of-range value arrived on some input, as it should have. Turned out the hardware guys had missed that range specification, and out-of-range inputs were normal operation in some situations. I forget how that was resolved.

    7) On another fly-by-wire system we found the watchdog hardware was unreliable. WTF? It was too expensive to make changes to the existing hardware and get it all re-qualified, so in a bizarre solution they ended up adding a watchdog to the watchdog!

    8) Here is a biggie: Back in the day British and US fighter jets used the same IFF system https://en.wikipedia.org/wiki/Identification_friend_or_foe but slightly different versions. That was fine until the Gulf War. Then they realized there was a good chance of Brits and Yanks shooting each other down because they could not identify each other. They took a young guy off our software team to implement a kludge in the Brits' IFF hardware to make them compatible. It turned out he had designed the kludge some years before and suggested it be implemented; the idea was rejected. It was very lucky he kept his schematics and design notes in a bottom drawer all that time. In a mammoth effort, working 24 hours a day for a week or so, he got that kludge implemented and rolled out to the Gulf.

    Anyone else have personal examples of this battle?

  • Cluso99 Posts: 18,066
    edited 2018-09-27 13:34
    The Singer mini-computer where I was trained on the hardware had a world-wide policy that hardware engineers couldn't transfer to software and vice versa. One guy had left the company and came back some time later in the other role. I was the first to transfer from hardware to software without leaving. A few years after I did leave and started my own company, designing add-on hardware and also writing commercial software, I was recruited as a Hardware/Software Specialist on the mini. They finally realised there were cases where it was necessary to know both sides.

    And an example...
    Knowing the hardware intimately, I wrote a backup program to copy 10MB discs in just under 4 minutes instead of 10 minutes each.
    When I installed my program, I often found that the copy would take in excess of 20 minutes. I would tell the engineers that one of two settings was incorrect and the mini was running slow. Had many an argument until they checked and corrected it. Users were extremely happy, as their mini ran much faster afterwards.

    Oh, and guess what...
    The mini could have up to 20 processes (hardware time-sliced) with their own memory (think cog memory) and a common shared memory (think hub memory). And the RISC instruction set was memory-to-memory, and the branch instruction had a variant (link) that stored the return address in memory pointed to by the extra address in the instruction (think jmpret).
    Sound similar to anything you know ???
    What was different was that this mini was decimal. Memory addressing, add/subtract/multiply/divide, etc. was all decimal!!! Chip... P3???
  • Yanomani Posts: 1,524
    Cluso99 wrote: »
    What was different was that this mini was decimal. Memory addressing, add/subtract/multiply/divide, etc. was all decimal!!! Chip... P3???

    Hi Cluso99

    Although I personally have nothing against decimal math operations being an integral part of any ALU architecture, I have a lot of concerns about using them in the innermost guts of counters, pointers, indexers: anything that is supposed to mostly grow in single steps or, if you prefer, a bit at a time.

    IMHO, for the above-mentioned applications, Gray code seems a lot less prone to the race conditions and other errors that have historically plagued sequential addressing techniques.

    As an example, an asynchronous FIFO design can suffer a severe infant mortality rate if its read/write pointer design relies on, e.g., binary counters. Decimal counters, if used to drive the same structures, would surely make the bad effects even worse.

    The best references I ever used when learning about that subject can be found at the following Sunburst Design link; those guys are really good, and it's worth reading them all.

    sunburst-design.com/papers/
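    As a taste of what those papers formalize, here is a minimal C sketch of the binary/Gray conversions. The point is that consecutive Gray codes differ in exactly one bit, so a pointer sampled asynchronously can never be caught mid-carry:

        #include <stdio.h>

        /* Binary -> Gray: XOR each bit with the next higher one. */
        static unsigned bin_to_gray(unsigned b) { return b ^ (b >> 1); }

        /* Gray -> binary: a prefix-XOR folds the code back down. */
        static unsigned gray_to_bin(unsigned g)
        {
            for (unsigned s = 1; s < 32; s <<= 1)
                g ^= g >> s;
            return g;
        }

        int main(void)
        {
            /* 7 -> 8 flips four bits in binary; in Gray every step flips one. */
            for (unsigned i = 0; i < 8; i++)
                printf("%u -> gray %u\n", i, bin_to_gray(i));
            printf("round trip of 13: %u\n", gray_to_bin(bin_to_gray(13)));
            return 0;
        }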

    Henrique

    Addit:

    Back in 1975, my first paid job (I was 19) in the computer industry was designing software for the Compucorp 300- and 400-series programmable calculators, which, in fact, had early and quite beautiful multichip microprocessors inside them.

    Working with them, I was led to assembly language programming, because the 400-series effectively has both high- and low-level instructions readily accessible. They had a special bit within a control word that let us operate their ALUs fully in binary or decimal mode, alternately.

    After Compucorp, I was invited by a local electronics engineering faculty to sink my brain into Fairchild's F8 architecture, through a Formulator (1977), because they had acquired the computer to seed and promote some digital-logic-related courses, but they had no one on their regular staff who understood a dime about it. It appeared they were used to big mainframes, and turned up their noses when faced with something that small.

    For me, it was a huge upgrade: I could learn and use it while being paid to do so, and also enjoy both its 13" HP CRT and a noisy ASR33 teletype machine as the keyboard and printer of the Formulator. Both were my first contact with that kind of equipment, and I loved them.

    Soon I acquired my first personal computer and brought it home with me: a Heathkit H89, which has two Z80s inside.

    Both the F8 and Z80 architectures have DAA instructions, though their operation has some differences particular to each one.

    Despite the above-mentioned concerns, you can count me among the decimal enthusiasts.

    The HP CRT I used (shown at the link below, sitting on top of an HP9845 of the time) was priced at US$ 6,000.00 by HP, solely for the monitor! A real steal!

    hpmuseum.net/display_item.php?hw=149

    The faculty's owner was also one of the largest Fairchild distributors of the time, so they surely had some promotional discount on the Formulator's price.

    computerhistory.org/collections/catalog/102743804

    The ASR33 was a regular stock machine, uglier than anything I had ever seen before, but it was my working terminal!
  • kwinn Posts: 8,697
    Heater. wrote: »
    You mean you were on the hardware side of the fence and some software guy showed up faults in your hardware? :)
    ......

    Not exactly. I was on the hardware side of the fence and software bugs mimicked known hardware faults.
    The most memorable one was a liquid scintillation counter where the electronics had been updated to control the mechanics and perform some calculations using a calculator chip. Rube Goldberg would have been envious of that kludge.

    The instrument had a 300 vial sample conveyor, and used microswitches to detect samples and an "end of run" plug that was placed after the last sample. A bad "end of run" microswitch would either stop the run before all the samples were counted or count them multiple times.

    This instrument would intermittently count the samples multiple times. Typically this indicates a bad microswitch, so I replaced it. The problem persisted. After many hours of troubleshooting I finally realized that the problem occurred when the "end of run" plug came after 9 samples. Then I checked with 19 samples: same thing. It turned out the software did not check the microswitch status when the carry bit was set.
  • jmg Posts: 15,140
    Yanomani wrote: »
    Although I personally have nothing against decimal math operations being an integral part of any ALU architecture, I have a lot of concerns about using them in the innermost guts of counters, pointers, indexers: anything that is supposed to mostly grow in single steps or, if you prefer, a bit at a time.

    I'm not really following this? If you mix binary and decimal then sure, like any mixed system, you can expect issues, but a system that is entirely decimal has no SW issues with counters, pointers, or indexers.

    The big problem with decimal is more physical/memory-design related, in that you effectively waste decode bits, so your system is slower than it might have been and may need more pins.

    Let's take a 24-bit design as an example:
    2^24 = 16777216
    10^6 = 1000000 (six BCD digits also occupy 24 bits)
    So the same address decode depth reaches 16x more memory in a binary design than in a decimal one.
    Adding another decimal digit bumps the decimal design to 28 bits, but it still reaches less memory than the binary one.
    ...and this is before you even look at interconnects with other systems.
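    That arithmetic, checked as a tiny C program (nothing here beyond the numbers already in the post):

        #include <stdio.h>

        int main(void)
        {
            unsigned long binary  = 1ul << 24; /* 24 address bits, pure binary     */
            unsigned long decimal = 1000000ul; /* 6 BCD digits also occupy 24 bits */
            printf("binary reach : %lu\n", binary);                     /* 16777216 */
            printf("decimal reach: %lu\n", decimal);                    /* 1000000  */
            printf("ratio        : %.2fx\n", (double)binary / decimal); /* 16.78x   */
            return 0;
        }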
  • Heater. Posts: 21,230
    Bottom line is that if you want to build a machine to do arithmetic, it's simpler, quicker, and cheaper to do it in binary.

    Also, no matter what number base you use, there will be errors due to the limited number ranges and resolution we can build in hardware.

    However, those guys that deal in money (banks, bookkeepers, etc.) would really like the computers they use to make the same errors as they do by hand. They work in decimal, so they don't like the different errors you get in binary.

    I suspect that most of the efficiency we gain by building binary computers is wasted in fixing things up to satisfy the bookkeepers.
