
De-Lurk: My P2 Project Plans:


The "Grand Plan" is to build a virtual P2 server/minicomputer.


What do I mean by this?

In a headless process I can start any number of arbitrary "P2s". Interaction with them would be done via some kind of client (be it curses, SDL, GL, or even web I suppose). Code loading, simulation, debugging etc... all via those clients.

Then, after a bit I'd build the thing for real.

Here's my current tasklist:
  • Model the processor in OTP code.
  • Bytecode disassembler (Primarily to verify that my parsing of the bytecode is accurate for calling the virtual OpCode dispatch table).
  • Implement each OpCode in the VM.
  • Build some kind of UI (a variant of a PDP-11/IMSAI 8080 type console would be just the ticket). Being able to stop/start/step/change the speed of your system clock is something I miss from the bit-twiddling days. As the whole thing is virtual, it should be possible to do all the introspection you could possibly need.
  • Build in hardware ;-)

The other advantage of using OTP to model this system is that I can generate an arbitrary number of P2s and link their pins in any way I like, which means I can conceivably model meshes of P2s before building them.
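
A rough sketch of what that could look like in OTP terms - the chip module here is a throwaway placeholder, not the eventual simulator code:

    defmodule P2Sketch.Chip do
      # Placeholder "chip": just an Agent holding an id and an empty pin map.
      use Agent
      def start_link(id), do: Agent.start_link(fn -> %{id: id, pins: %{}} end)
    end

    # Start 1000 of them under one supervisor. Wiring pins together would then
    # just be a matter of handing pids (or registered names) to each process.
    children =
      for n <- 1..1000 do
        Supervisor.child_spec({P2Sketch.Chip, n}, id: {:p2, n})
      end

    {:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)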

I'm most of the way through parts 1 and 2. Part 3 scares me a little because I still don't fully understand how all the instructions work.

As we go.

Comments

  • KeithE Posts: 957
    edited 2018-10-19 01:36
    What is "OTP code"? Erlang?
  • octetta Posts: 125
    edited 2018-10-19 05:11
    KeithE wrote: »
    What is "OTP code"? Erlang?

    It’s kind of like a standard library of code and task / communication management practices.
  • __red__ Posts: 470
    edited 2018-10-19 10:19
    octetta wrote: »
    KeithE wrote: »
    What is "OTP code"? Erlang?

    It’s kind of like a standard library of code and task / communication management practices.

    Back in the 1980s a research group at Ericsson was tasked with designing telephone switches that would not break down. They went through lots of different technologies, languages, and techniques, and ended up inventing a new language (erlang) and a new way of structuring applications (OTP, the Open Telecom Platform).

    They achieved nine nines of availability, including zero downtime for upgrades.

    The philosophy behind OTP really is this:
    • You can't enumerate every way that your software can fail so why pretend?
    • Write software structured as many different "actors" who only communicate via messages. Plan for messages to get lost. Plan for your actors to crash. Plan for your data to get corrupted. If any of the above happen, kill all your children and try again (the language does this for you automatically).
    • Failure is considered normal. It is not unusual in my busy production applications to have hundreds of processes fail per minute, yet the application continues to operate flawlessly.

    Most faults are transient - for example, pulling a file from a remote site. How many times have you seen code that says "if I fail to download this, try X times"?

    In erlang you'd start it as a "Supervised Process". If the file did not come down correctly, the process would crash and the supervisor would automatically restart the whole process to try again. It would do that up to a (configurable) number of times.

    If that continued to fail, that supervisor would die which would then kick off your next set of logic which captured what to do if the site was actually offline completely... and so on.

    Writing your application in "trees" is a different way of thinking, but it produces code that is ridiculously reliable and scales.
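
    A minimal Elixir sketch of that pattern (the worker and its download step are hypothetical; only the restart policy is the point):

    defmodule Downloader do
      use Task, restart: :transient          # restart only on abnormal exit

      def start_link(url), do: Task.start_link(__MODULE__, :run, [url])

      def run(url) do
        # If this raises or the match fails, the Task crashes and the
        # supervisor restarts it, up to the limits configured below.
        {:ok, _body} = fetch(url)
      end

      defp fetch(_url), do: {:ok, "..."}     # stand-in for a real HTTP call
    end

    # Up to 3 restarts in 5 seconds; after that the supervisor itself gives up,
    # and *its* supervisor (the next layer of logic) takes over.
    Supervisor.start_link(
      [{Downloader, "https://example.com/file"}],
      strategy: :one_for_one, max_restarts: 3, max_seconds: 5
    )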

  • Getting vaguely back on track - I've started blogging about the P2 and this project. First two articles are up:

    See:
    Article #1: https://evil.red/posts/writing-a-p2-simulator-in-elixir/
    Article #2: https://evil.red/posts/writing-a-disassembler-in-elixir/

    Enjoy!
  • You are going to have fun!

    And you saw what a few of us did. Multicore mini.
  • potatohead wrote: »
    And you saw what a few of us did. Multicore mini.

    I did not and searching the forums doesn't locate anything - got link?
  • kwinn Posts: 8,697
    __red__ wrote: »
    potatohead wrote: »
    And you saw what a few of us did. Multicore mini.

    I did not and searching the forums doesn't locate anything - got link?

    The Propeller 1 was used to emulate several 8-bit microprocessors, and IIRC some 16-bitters. Try searching the forums for "emulators".
  • __red__ wrote: »
    potatohead wrote: »
    And you saw what a few of us did. Multicore mini.

    I did not and searching the forums doesn't locate anything - got link?

    Oh, I mean as a potential project. There are no links to share. It has just been side discussion.

  • I just posted article #3 in the series...

    https://evil.red/posts/lets-talk-architecture/

    This one is more Elixir heavy than P2 heavy.

    Article #4 will be P2 heavy and I will be soliciting corrections :-)
  • I am apparently a continuous stream of consciousness this morning...

    Article #4 - Building a Cog
    (at the end of this article we have a cog that does a fetch/execute loop and has implemented a dizzying number of instructions. err.. one, NOP) ;-)

    https://evil.red/posts/building_a_cog/
  • __red__ wrote: »
    I am apparently a continuous stream of consciousness this morning...

    Article #4 - Building a Cog
    (at the end of this article we have a cog that does a fetch/execute loop and has implemented a dizzying number of instructions. err.. one, NOP) ;-)

    https://evil.red/posts/building_a_cog/

    Man, that's a nicely written blog.
    I'm going to give it a read tonight.
  • I think you simplified the PC too much.

    As far as I remember COG ram/lut is addressed in longs, thus adding 1 to the PC is ok there. But HUB is addressed in bytes and you need to add 4 to your PC.

    Because HubExec can also work from unaligned addresses. And if that feature hasn't been removed yet, this is a way to run HubExec in the lower $400: if you use odd addresses it uses HUB, not COG?

    Otherwise I find your blog quite interesting. I've never used Erlang and functional programming is completely new for me.

    Keep on going,

    Mike
  • Erlang and functional programming is completely new for me.

    Same. And yes, please do.
  • msrobots wrote: »
    I think you simplified the PC too much.

    Yup, I did and it was intentional. I can't implement reading from the Hub until I build the hub and the hub is a different process that we haven't written yet. Writing the hub is the subject of the next article I'm writing this evening!
    msrobots wrote: »
    As far as I remember COG ram/lut is addressed in longs, thus adding 1 to the PC is ok there. But HUB is addressed in bytes and you need to add 4 to your PC.

    Because HubExec can also work from unaligned addresses. And if that feature hasn't been removed yet, this is a way to run HubExec in the lower $400: if you use odd addresses it uses HUB, not COG?

    But PLEASE keep pointing out stuff like this. I still don't fully understand a lot of what's going on as far as how certain instructions work - ESPECIALLY how jumps, calls etc... work. There's a lot of cargo-cult understanding of the P2 in my mind.
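
    If I'm reading the docs right, the eventual fix will probably end up looking something like this (the $400 boundary and the helper name are my guess, not finished code):

    defmodule PcSketch do
      # Cog/LUT space ($000-$3FF) is addressed in longs, so the PC steps by 1;
      # hub space is addressed in bytes, so it steps by 4 per long.
      def next(pc) when pc < 0x400, do: pc + 1
      def next(pc), do: pc + 4
    end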

    Tonight I'm going to try and implement:
    * Build the HUB process
    * Write the code to allow a cog to read from Hub memory
    * Load an EEPROM file.
    * Copy to CogMem
    * Set PC, and NOPNOPNOPNOP

    If I have space on the page I'll refactor the Cog process so we can spawn a thousand P2 emulators running NOPs.

    Thanks for the feedback - it helps a lot.

  • Ym2413a wrote: »
    Man, that's a nicely written blog.
    I'm going to give it a read tonight.

    Thanks, I'm glad you're enjoying it.
    potatohead wrote: »
    Erlang and functional programming is completely new for me.

    Same. And yes, please do.

    Excellent! When we get to the more complicated instructions and you see how the decoding works, it's going to blow your mind. It's incredibly rare for an if statement to ever end up in my code.

    It helps to know that people are reading it.

    I should upload the source code to GitHub as I go and tag it for each article so people can download and play with it too.

    More to come!
  • Actually, the one thing I'm really struggling with is balancing three competing aims.

    On the one hand, people who are coming from the elixir side may be struggling with the microcontroller concepts.

    People coming from the microcontroller side but with no functional background are going to be struggling both with the different way that problems are solved in that paradigm AND with new language syntax.

    ... add to that the fact that it's actually a fairly difficult problem to solve in the first place, and it is quite a thing to balance.

    If you have questions - please ask.
  • For me, I have two interests:

    One is that you are outlining, and potentially building, a mini with a P2 chip. I have similar goals, and all you write is relevant and welcome.

    The other is that you doing the emulation functionally makes for great context! Functional programming appears somewhat obscure, and getting at the fundamental obstacles to understanding it would be useful to me.

  • potatohead Posts: 10,261
    edited 2018-11-14 18:40
    Red, I have a question.

    In the function for instruction decode, there basically is one unique path for each instruction, yes?

    I normally see this done via switch or case statement.

    Or, in assembly, with a shift, maybe add and jump table.

    Or, say something like on X goto. Not really sensible for P2 instructions, but you hopefully get what I am asking here.

    It almost seems to me that "on X goto", and/or the assembly shift-and-add, are more functional than a chain of if-then-else statements.

    It also seems to me I likely do not yet understand that code you wrote, and what exactly makes it functional, and why that matters.

    Is it the direct use of the data?

    Blog post, comment here, pointer, all appreciated.

  • potatohead wrote: »
    Red, I have a question.
    In the function for instruction decode, there basically is one unique path for each instruction, yes?

    Currently, yes. But not in the future.
    It also seems to me I likely do not yet understand that code you wrote, and what exactly makes it functional, and why that matters.

    Perhaps a very brief description of what functional programming is would help. FP is defined by a few "Laws" that seem like limitations, but if you promise to obey them, amazing things can happen.

    Here are (some of) the Laws:

    1. Data is immutable. Once you set a variable (the term is "bind" in FP), you can't change it.

    Example 1:
    Imperative Language
    -------------------
    x = 1;
    x = x + 1;
    print(x)
    2
    
    Functional Language
    -------------------
    x = 1;
    x = x + 1; # Boom - Won't compile.
    

    What's the advantage? If I have multiple threads running and they share memory, because I can guarantee that a variable won't change the compiler / runtime can de-duplicate across them.
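
    (One Elixir-specific wrinkle, as an aside to the pseudocode above: Elixir lets you *re-bind* a name, so x = x + 1 does compile there - but the data a name points at can never be modified in place. For example:)

    x = 1
    x = x + 1        # fine in Elixir: the name x is simply re-bound to a new value
    m = %{a: 1}
    # m[:a] = 2     <- there is no in-place update; you build a new map instead:
    m2 = %{m | a: 2}
    {m, m2}          #=> {%{a: 1}, %{a: 2}}  - the original is untouched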


    2. Functions are first class citizens

    Functional programming originated in math so it carries a lot of those ideas. If you think of a mathematical formula, really it's describing a *relationship* more than calculating a value.

    Example:
    Imperative Language
    -------------------
    z = x + y;  # BOOM - variables x and y are undefined.
    
    Functional Language
    -------------------
    z = x + y;
    x = 4;
    y = 3;
    print(z);
    7
    

    This is actually the biggest deal because it means that in much the same way that a mathematician can re-arrange equations, do variable substitutions, simplify and cancel out variables... a functional programmer's compiler can do that.

    In our multi-threaded example - say I have a hundred threads all looking at the same 1G chunk of data in memory. The hundred processes are already looking only at a single instance of the data.

    Let's say that one of the processes wants to change one byte - in the language, it will create another variable which is a copy of the 1G data with that one byte changed.

    In the implementation however, it's stored as "That shared 1G data there, with this single change". In some functional languages actual calculation of values is deferred until the very, very, last minute.
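
    A small concrete version of that in Elixir (the sizes are just for illustration):

    big = :crypto.strong_rand_bytes(1_000_000)   # ~1 MB of data
    v1  = %{payload: big, version: 1}
    v2  = %{v1 | version: 2}                     # the "copy" with one field changed
    # v1 is untouched, and v2's :payload is the very same binary in memory;
    # only the small map structure around it was rebuilt.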


    Still - the reason that FP fits this requirement so well for me is this:

    3. Pattern Matching:

    Here's a sneak peek at what a more complex instruction might look like:
    #          EEEE            0000001           C          Z          I          DDDDDDDDD  SSSSSSSSS        ROL     D,S/#       {WC/WZ/WCZ}
    def dism(<<con::size(4), 0b0000001::size(7), c::size(1), z::size(1), i::size(1), d::size(9), s::size(9)>>)
    

    You can see that in the actual function header itself, I have split the incoming 32 bits into separate fields with lengths of 4 (conditional), 7 (instruction), 1 (c), 1 (z), 1 (i), 9 (destination), 9 (source). Those variables are ALREADY decoded into values I can use in my function. No bitmasks / bitmath required!

    The other thing you may have noticed is that I specified the hard-coded value in the field that I wanted to match on. In most cases, that's only going to be the instruction field.

    What I love about this is four-fold:
    1. The instruction looks EXACTLY like the specification in the document. Not only can we see at a glance if it's wrong or not, we can probably automatically generate these functions from Chip's spec directly.
    2. If your document contains specs that are identical for two different instructions, the compiler will tell you. Remember:
    warning: this clause cannot match because a previous clause at line 3 always matches
      lib/p2_dasm/conditionals.ex:4
    

    Take a look at the spec for bitl/testb:
    EEEE 0100000 CZI DDDDDDDDD SSSSSSSSS        BITL    D,S/#       {WCZ}
    EEEE 0100000 CZI DDDDDDDDD SSSSSSSSS        TESTB   D,S/#       WC/WZ
    

    When I automatically generated that code from the documentation the compiler flagged it for me and that's when I asked the question on the forum about how to distinguish the two.

    Speaking of ...

    3. When you get interesting clauses like the bitl/testb scenario, where it's one instruction if C=Z and a different instruction if they're different, you can put that in the spec. It may look like this:
    def dism(<<con::size(4), 0b0100000::size(7), c::size(1), c::size(1), i::size(1), d::size(9), s::size(9)>>) #bitl
    def dism(<<con::size(4), 0b0100000::size(7), c::size(1), z::size(1), i::size(1), d::size(9), s::size(9)>>) #testb
    

    What's the difference in the spec? I used c::size(1) for both the C and Z fields. Using the same variable name means that they can't be different values, right? So that matches CZ being 11 or 00.

    Alternatively I could have written:
    def dism(<<con::size(4), 0b0100000::size(7), c::size(1), z::size(1), i::size(1), d::size(9), s::size(9)>>) when c == z #bitl
    def dism(<<con::size(4), 0b0100000::size(7), c::size(1), z::size(1), i::size(1), d::size(9), s::size(9)>>) #testb
    


    4. Having the decoding in the function definition removes all the bit-shifting and conditional ifs needed inside the function itself. That's huge when trying to work out why the disassembler / emulator chose to go down one path instead of the path you wanted, because it's all in one place.
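
    As a rough, self-contained illustration of that dispatch style (not the real p2_dasm code - the module name and return shapes here are invented), something like this can be pasted straight into iex:

    defmodule DismSketch do
      # NOP is the all-zero long.
      def dism(<<0::size(32)>>), do: :nop

      # EEEE 0000001 CZI DDDDDDDDD SSSSSSSSS    ROL D,S/# {WC/WZ/WCZ}
      def dism(<<con::size(4), 0b0000001::size(7), c::size(1), z::size(1),
                 i::size(1), d::size(9), s::size(9)>>) do
        {:rol, %{cond: con, wc: c, wz: z, imm: i, d: d, s: s}}
      end

      # Anything else: hand back the raw long so nothing is silently dropped.
      def dism(<<other::size(32)>>), do: {:unknown, other}
    end

    DismSketch.dism(<<0::size(32)>>)
    #=> :nop
    DismSketch.dism(<<0b1111_0000001_111_000000001_000000010::size(32)>>)
    #=> {:rol, %{cond: 15, wc: 1, wz: 1, imm: 1, d: 1, s: 2}}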




  • potatohead Posts: 10,261
    edited 2018-11-15 03:48
    Thanks for this.

    I am very seriously choking on #1. Off watching videos. I really, really do not get this functional thing well enough to proceed, and #1 is where I just Smile out!

    #2 and #3 make some great sense to me. I can see that, and the state reductions make things simple, clean. I think I understand that more.

    Thinking....

    :D
  • __red__ Posts: 470
    edited 2018-11-15 13:58
    potatohead wrote: »
    Thanks for this.
    I am very seriously choking on #1. Off watching videos. I really, really do not get this functional thing well enough to proceed, and #1 is where I just Smile out!
    :D

    Yeah - #1 was the hardest for me too, until I realized that I'd been doing it already.

    Are you familiar with the UNIX shell at all?
    cat /etc/shadow | grep '^root' | cut -d: -f2
    

    The contents of /etc/shadow don't change.
    The contents of the pipe between the grep and the cut don't change.

    If I were to use the exact same logic as the shell version above in elixir, it would look like this:
    File.stream!("/etc/shadow")                    # cat /etc/shadow
    |> Enum.filter(&(Regex.match?(~r/^root/, &1))) # grep '^root'
    |> Enum.map(&(String.split(&1, ":")))          # cut -d
    |> Enum.map(&(Enum.at(&1, 1)))                 # -f2
    |> Enum.join("\n")                             # convert to string
    

    Output for each stage:
    File.stream!("/etc/shadow")
    ^^^ Contents of complete shadow file, line by line
    
    |> Enum.filter(&(Regex.match?(~r/^root/, &1)))
    ["root:<redacted>:17184::::::\n"]
    Enum.filter "Let the value through if the function inside is true"
    Regex.match? - you can probably guess ;-)
    
    Note we get a list of lines, because there could
    have been more than one matching line
    
    |> Enum.map(&(String.split(&1, ":")))
    [
      ["root",
       "<redacted>",
       "17184", "", "", "", "", "", "\n"]
    ]
    ^^^ Enum.map performs the same operation
    (in this case, split) on every item.
    
    |> Enum.map(&(Enum.at(&1, 1)))
    ["<redacted>"]
    ^^^ Give me the second item (element 1, because
    we start at 0)
    
    |> Enum.join("\n")
    "<redacted>"
    ^^^ Join all the elements in the list into a string,
    \n separated (ie, newline)
    

    Not a single variable modified.
  • potatohead Posts: 10,261
    edited 2018-11-16 06:28
    The shell analogy makes sense.

    I can see going from that to something like SnapChat. Truth is, communication / sharing is more data flow than it is anything else. The functional stuff I understand could work, though I would also argue it may be more memory / resource intensive than it could be. I would also argue nobody really cares, because it can scale. All that seems to be a big win for the functional pioneers right now. Cool beans! (Frankly, I just realized immutability brings rollbacks, history, revisions and a few other goodies all useful in these kinds of spaces for mostly free! Nice.)

    All fair enough.

    But, what if I want to do something like a mandelbrot set?

    I have a hard time even visualizing how I might do that without expensive, complicated-to-optimize recursion. Does that actually make sense? Going down that road feels an awful lot like some portability arguments.

    And on those, it's all about expected use cases, return on investment. Take this P2 chip. We've had some "make it portable" discussions, and when the reality of highly differentiated hardware play out, it all falls flat. Using defines and other complicated tools to make one code body work in very different ways is of debatable value, given two simpler code bodies and a common API are also possible. Again, expected use could balance all that out.

    I also submit it's just not possible / practical to completely eliminate state. Given that is true, or at the least very high on the suspected-true list, there will be a line where functional makes sense until it doesn't. And what happens when it doesn't?

    Thanks for your posts. They are good answers to my questions. I just have more questions and am going to mull it all over for a while.

    In any case, I love your project, and will continue to read with interest, but do not want to sideline this and make it about my own confusion / exploration.


  • potatohead wrote: »
    I have a hard time even visualizing how I might do that without expensive, complicated-to-optimize recursion. Does that actually make sense?

    Sure, because we're used to thinking of each iteration of a recursion being implemented by a position on the stack. Therefore, if I recurse one billion times, there should be 1 billion "returns" at the end of my recursion which will cause the stack to explode and die.

    That's an "implementation issue"[tm].

    The way that erlang / elixir does it is something called "Tail Call Optimization". It's the idea that *as long as* the last command in your function is the recursive call, then you don't actually have to allocate another stack frame. If you think about 'unwinding' the previously discussed billion returns, they all end up in the same place.

    Therefore, the compiler just uses a single stack frame:

    https://en.wikipedia.org/wiki/Tail_call
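
    A tiny Elixir example of the shape that takes (a made-up module, nothing from the project):

    defmodule TailSketch do
      def sum(list), do: sum(list, 0)

      defp sum([], acc), do: acc                   # base case: just return the accumulator
      defp sum([h | t], acc), do: sum(t, acc + h)  # tail position: nothing left to do after the call
    end

    TailSketch.sum(Enum.to_list(1..1_000_000))
    #=> 500000500000, in constant stack space
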
    And on those, it's all about expected use cases, return on investment. Take this P2 chip. We've had some "make it portable" discussions, and when the reality of highly differentiated hardware play out, it all falls flat. Using defines and other complicated tools to make one code body work in very different ways is of debatable value, given two simpler code bodies and a common API are also possible. Again, expected use could balance all that out.

    Honestly, I'm not fully understanding the link between functional and portability - help?


    I also submit it's just not possible / practical to completely eliminate state. Given that is true, or at the least very high on the suspected-true list, there will be a line where functional makes sense until it doesn't. And what happens when it doesn't?

    We don't eliminate it, we put strong boundaries around it.

    Let me give you two examples:

    1. elm-lang - Elm is a functional language that compiles to JavaScript. It's a language that, if your code compiles, guarantees no run-time errors. The word "guarantee" is a strong one, but it's working(!)

    The way it works is this:
         Model              View                         Update
    (Your State) --> (Turns your Model --> (Takes signals (button clicks etc)  -+
                      into HTML / DOM)     and uses that to make a _new_ State) |
           ^                                                                    |
           ---------------------------------------------------------------------
    

    ... because the State can only be "changed" (really, replaced) in known ways the compiler can do ridiculous amounts of optimization. Elm-lang is the fastest web framework there is right now.

    2. Elixir

    Actually - I'll go ahead and write the elixir version in the blog but it's the same basic principle. State lives inside a separate process that can only accept outside stimulation via messages. Each function that responds to a message has to return what the new State for the process will be.

    (In reality, a GenServer is just an infinitely recursive function: it takes the last iteration's state, receives a message, works out what the new state should be, and then calls itself again with that new state.)
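
    Stripped of everything the real GenServer gives you, the core idea looks roughly like this (a toy loop, not OTP's actual implementation):

    defmodule LoopSketch do
      def start(initial), do: spawn(fn -> loop(initial) end)

      defp loop(state) do
        receive do
          {:get, caller} ->
            send(caller, {:state, state})
            loop(state)            # same state, next iteration
          {:set, new_state} ->
            loop(new_state)        # "changing" state = recursing with a new argument
        end
      end
    end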

  • potatohead Posts: 10,261
    edited 2018-11-16 22:57
    Good stuff Red. I am going to follow with interest. Let's table this for a time. (Edit: by all means continue if it makes sense. I personally want to grapple with what I learned. Do not grok what it means yet, or how it all maps down to ways to use a P2.)

    Allow for the sinking in of new ideas. And let more of your work play out. May help!

    I did understand tail call after my last effort. I feel that is the first of new ideas which could make programming look very different than it is today.

    In any case, you done good. I have lots to mull over. And some progress made on this topic. (Compartmentalizing state helps a lot!)

    I can follow your project better now, and that's good.

    :D

  • potatohead Posts: 10,261
    edited 2018-11-16 23:20
    And I lied... one more set of thoughts, before I just let it all sink in for a while.

    I think I am understanding that, in many cases, there must be code that is impure from a functional point of view. Ok. Rather than attempt to employ this way at all times, it can be employed sometimes. And that happens for a lot of reasons. Scale, resources, lack of understanding on how...

    Would it be rational to qualify on some basics?

    Like does data flow, or change. Where it flows, functional is indicated. Where it needs to change, it could be functional as in create new rather than modify, or if not, that code body gets compartmentalized? Secondly, create new does imply the old is kept. But it does not have to be kept, which speaks to the tail call optimization, as an easy example. And I think the beauty here is we don't really care whether it's kept or not, because the compiler can actually know whether or not doing that is required.

    Another thought boils down to why this whole affair is called functional. That appears surprisingly hard to nail down.

    Seems to me there is explicit code, which is functional code. You see all you need to see. It is explicit. No "side effects." If you do not see it right there in the code, it is not happening? It works the same way no matter what.

    Not functional then, can be simplified down to anything implicit? This is state, side effects, etc. It may work in radically different ways depending on... (insert the usual suspects here)

    I apologize for the pedantic nature of this. But I really can't quite nail down what people mean by functional.

    We have a long-running conversation here about the difference between concurrent and parallel, for example. This is slippery in similar ways.




  • potatohead wrote: »
    I think I am understanding that, in many cases, there must be code that is impure from a functional point of view. Ok. Rather than attempt to employ this way at all times, it can be employed sometimes. And that happens for a lot of reasons. Scale, resources, lack of understanding on how...

    We should probably define "pure" and "side-effects".

    Function purity basically means that every time you execute a function with the same inputs, you get the same outputs.

    The REALLY cool thing about that is that all your function-calls suddenly turn into lookup tables. Does your function take an hour to run because of really complex operations? Do you call it 10 times in different parts of your code with the same arguments - BOOM, optimized out.
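
    A hand-rolled illustration of why purity makes that safe - caching results keyed by the argument (the fib function and the ETS table are invented for the example):

    defmodule MemoSketch do
      def start, do: :ets.new(:memo_sketch, [:named_table, :public, :set])

      def fib(n) do
        case :ets.lookup(:memo_sketch, n) do
          [{^n, cached}] -> cached               # same input, same output: safe to reuse
          [] ->
            result = slow_fib(n)
            :ets.insert(:memo_sketch, {n, result})
            result
        end
      end

      defp slow_fib(n) when n < 2, do: n
      defp slow_fib(n), do: slow_fib(n - 1) + slow_fib(n - 2)
    end

    # MemoSketch.start(); MemoSketch.fib(35) is slow once, instant afterwards.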

    Side-effects are anything that happens outside the function that isn't either a direct input or output to the function.
    Would it be rational to qualify on some basics?

    Like does data flow, or change. Where it flows, functional is indicated. Where it needs to change, it could be functional as in create new rather than modify, or if not, that code body gets compartmentalized? Secondly, create new does imply the old is kept. But it does not have to be kept, which speaks to the tail call optimization, as an easy example. And I think the beauty here is we don't really care whether it's kept or not, because the compiler can actually know whether or not doing that is required.

    Full disclosure, not every functional language implements all of the Laws. For purity, see Haskell and Elm-lang.

    Erlang / Elixir is not pure out of the box. It's also not statically typed. Why? Erlang was originally designed to run telephone switches - in other words, when there's downtime, people die.

    "I'm sorry, you can't call 911 between the hours of midnight and 3am due to switch upgrades" -- Never Happened.

    Erlang is designed to do hot-code reloading. In fact, when I'm developing, I do hot code reloading constantly instead of re-running my application.

    When they designed the language they sacrificed static typing for the ability to do hot-code loading. Erlang is pragmatic.
    Another thought boils down to why this whole affair is called functional. That appears surprisingly hard to nail down.

    Pure functions are math functions.
    Seems to me there is explicit code, which is functional code. You see all you need to see. It is explicit. No "side effects." If you do not see it right there in the code, it is not happening? It works the same way no matter what.

    Also, functions are first class citizens, data is immutable, recursion over looping, obsessions with operations on lists...
    Not functional then, can be simplified down to anything implicit? This is state, side effects, etc. It may work in radically different ways depending on... (insert the usual suspects here)

    Here's a puzzle.

    How does one write a pure function that generates a random number?
    How does one write a pure function that inserts a row in a database with a unique constraint?
    How does elm (a completely and provably pure language with no side-effects) integrate with other JavaScript modules?

    anyways - brb - going to finish this genserver primer.
  • __red__ Posts: 470
    edited 2018-11-17 12:11
    @potatohead - a quick GenServer primer; it should help with the concept of "state" too.

    https://evil.red/posts/genserver_abridged/
  • Also - another new post - Building a Hub!

    https://evil.red/posts/building_a_hub/

    The hub:
    * Copies the first 1024 bytes (512 longs) into hub RAM
    * Creates COG0, copying the same 512 longs into Cog RAM
    * Cog0 is now capable of executing all the instructions it knows about (umm, NOP) ;-)

    Side-Question: I always thought the P1 copied all the data it could get from the EEPROM into HUB RAM. I wonder why we only copy the first 1024 bytes in the P2?

  • Another new post:

    * Implementing CNT.
    * Implementing JMP (relative, cog to cog)

    https://evil.red/posts/finding_a_pulse/

    Here's what the output looks like after today:
    [red@evil:~/projects/p2_dasm]$ iex -S mix
    Erlang/OTP 20 [erts-9.3.3.3] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:10] [hipe] [kernel-poll:false]
    
    Compiling 3 files (.ex)
    Interactive Elixir (1.7.4) - press Ctrl+C to exit (type h() ENTER for help)
    iex(1)> {:ok, pid} = P2Dasm.Hub.Worker.start_link("nopalooza.eeprom")
    {:ok, #PID<0.166.0>}
    iex(2)> P2Dasm.Hub.Worker.startClock(pid)
    :ok
    cnt: 0: pc: 0 -> NOP
    cnt: 1: pc: 1 -> NOP
    cnt: 2: pc: 2 -> NOP
    cnt: 3: pc: 3 -> NOP
    cnt: 4: pc: 4 -> NOP
    cnt: 5: pc: 5 -> NOP
    cnt: 6: pc: 6 -> NOP
    cnt: 7: pc: 7 -> NOP
    cnt: 8: pc: 8 -> JMP #$-8
    cnt: 9: pc: 0 -> NOP
    cnt: 10: pc: 1 -> NOP
    cnt: 11: pc: 2 -> NOP
    cnt: 12: pc: 3 -> NOP
    cnt: 13: pc: 4 -> NOP
    cnt: 14: pc: 5 -> NOP
    cnt: 15: pc: 6 -> NOP
    cnt: 16: pc: 7 -> NOP
    cnt: 17: pc: 8 -> JMP #$-8
    cnt: 18: pc: 0 -> NOP
    cnt: 19: pc: 1 -> NOP
    cnt: 20: pc: 2 -> NOP
    cnt: 21: pc: 3 -> NOP
    cnt: 22: pc: 4 -> NOP
    cnt: 23: pc: 5 -> NOP
    cnt: 24: pc: 6 -> NOP
    cnt: 25: pc: 7 -> NOP
    cnt: 26: pc: 8 -> JMP #$-8
    cnt: 27: pc: 0 -> NOP
    iex(3)> P2Dasm.Hub.Worker.stopClock(pid)
    :ok
    
  • (This, btw, is a good example of me guessing at why I'm seeing the numbers that I'm seeing. I'm still not fully following the logic behind the branching instructions. If someone could sanity-check my logic I'd sure appreciate it.)