How can I help develop MicroPython? — Parallax Forums


The GitHub repository seems inactive.
The last update was Dec 13th, 2020.
I am not able to post or read any issues there.
Because there are no issues, I do not know what I should work on.

I know that MicroPython is strategically very important to Parallax. So I would expect more activity in this area. How do I join in?


  • Ken GraceyKen Gracey Posts: 7,322
    edited 2021-06-28 16:26

    We would welcome the contribution. TeamOz @Tubular and @ozpropdev control the GitHub and are copied in on this thread.

    It remains a priority of ours to bring the P2 MicroPython port to the masses since so much has been accomplished. I have a few things on the wish list:

    • Documentation and how-to examples of all kinds
    • Ability to run existing MicroPython modules
    • Officially create a P2 "type" in Code with Mu

    We are wrapped up in the Spin2 documentation internally and can't do much except cheer the effort on, provide hardware to contributors, and bring this to the surface in P2 Live Forums.

    Thank you for your involvement here @lozinski. Maybe you could tell us a bit about yourself, how you found the P2 and MicroPython, and your interests. Welcome aboard!

    Ken Gracey
    Parallax Inc.

  • TubularTubular Posts: 4,378

    Hi Lozinski, there'll be another big push on MP in a couple of months, once the hardware we're building gets done, and we hope to get synced up with mainline again.

    The MicroPython team have recently made a big effort to get releases out every couple of months, and they seem to be achieving this, with MP 1.16 just released in the past week. I'd like to get everything synced up again on 1.17 if possible.

    Right now we need to focus on the hardware, but this will soon change. In the meantime, the areas that spring to mind are working on and testing the LittleFS VFS code, which should be possible to get working. We took a different direction in getting OzFFS (which uses a flash IC for storage) up and running, but we should be able to get LittleFS working alongside it.

    MicroPython have been working on making SoftSPI, SoftI2C, and others consistent across the various ports. We need to think about what this means for smart pins, because perhaps the soft libraries will become the standard and preferred way of implementing things.
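    For reference, a minimal sketch of what that consistent soft-peripheral API looks like on a current MicroPython port. The pin numbers are hypothetical, and a P2 port may differ; the block degrades gracefully when no `machine` module is present.

```python
# Sketch (assumed API, hypothetical pin numbers): the port-independent
# soft peripherals in recent MicroPython, machine.SoftSPI / machine.SoftI2C.
try:
    from machine import Pin, SoftSPI, SoftI2C
    HAVE_MACHINE = True
except ImportError:
    HAVE_MACHINE = False   # running under CPython; no hardware module

if HAVE_MACHINE:
    # Bit-banged SPI/I2C work on any GPIO pins, on any port.
    spi = SoftSPI(baudrate=500_000, polarity=0, phase=0,
                  sck=Pin(0), mosi=Pin(1), miso=Pin(2))
    i2c = SoftI2C(scl=Pin(4), sda=Pin(5))
    print(i2c.scan())      # addresses of responding I2C devices
```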

    Thanks for getting in contact.

  • You mentioned release 1.16. I only see release 1.13.
    Okay, so I must be looking at the wrong repository.

    What is the correct repository? Maybe that will have a list of issues?
    Warm Regards
    Christopher Lozinski

  • roglohrogloh Posts: 3,392

    It's the right repo, v1.13 was ported while MP has continued to move on to later versions. You are one of the few people showing interest in it. You need to look at the p2native branch too.

  • So I updated the README file with the current status. It now says:

    This version (1.13) is the currently used version of MicroPython on the Parallax Propeller P2. Since we released it on Dec 13, 2020, there have been several new releases of the main upstream MicroPython repository. We plan to merge them, but right now our attention is on connecting peripherals. There is useful code for soft I2C and SPI testing in the p2native branch.

    If you have questions, please post them on the Parallax Forums.

    I hope that I got that right.

  • If anyone is interested, here is the link to LittleFS.

    Sadly, it is not of interest to me.

  • TubularTubular Posts: 4,378

    Thanks Chris

    What areas are you interested in?

  • Thank you for asking. It helped focus my thoughts.
    I am still learning about this processor, and reading extensively.

    I am very interested in Python performance on this innovative microcontroller.
    Since I am new to this CPU, I will probably get this slightly wrong, but here is what I think happens.

    In a traditional microcontroller the CPU fetches the next instruction and executes it, perhaps with some pipelining. Here the cog has to request a block of code instructions (there is an assembly instruction for that), wait for it, copy that code into cog registers (did I get that right?), call that the code cache, and only then execute it. I wonder how much delay that introduces? How often is there a cache hit or miss? That depends on how many C functions fit in the cog RAM, and on how many C functions there are overall. Specifically, how big is the C executable for executing each Python bytecode (or library function)? How many fit into cog RAM? How often is there a hit or a miss on the C code to be executed? Do the C compilers even remember which code is in the cache, or do they fetch it afresh each time?

    So which C compiler do you recommend?
    One more set of documentation to read.

    And again, I am sure that I got the above description partially wrong. Forgive me, I am still learning.

  • roglohrogloh Posts: 3,392
    edited 2021-06-29 15:09

    The P2 execution model used by this native MicroPython port makes use of the HUB execution capability in the P2. P2 native instructions will be fetched from HUB RAM and get executed by the COG automatically in HUB exec mode. There is not really a cache used here, just a small FIFO to read short bursts of instruction sequences to be executed before the next branch occurs.

    We are using P2GCC tools created by Dave Hein with some customizations/tweaks I added on top, and this is all included as part of the Github repo. During the build process the MicroPython interpreter's source code is compiled into P1 assembly code via the older GCC toolchain for the P1. It is then dynamically translated into P2 assembly code (most of the PASM instructions GCC for P1 generates are very similar to PASM2) and fed to Dave Hein's P2 assembler and linker to build the MicroPython image.

    With the default build configuration settings on the P2 we typically have about a 96-128 kB heap, which is used for holding Python source/byte code and other dynamic Python run-time variables while the MicroPython interpreter executes. The remainder of HUB RAM is taken up by the MicroPython executable P2 image, the stack, and a variety of other included drivers and features that ozpropdev developed.

  • Excellent answer.
    Thank you.

    Here is the documentation about the Hub Execution Mode. It is lower on the page.

    I even made some edits to that page. I hope I got it right. A great way to contribute to open source is to start by editing documents.

  • lozinskilozinski Posts: 70
    edited 2021-06-30 16:39

    @ersmith made the following very interesting comment in another thread.

    XBYTE is wonderful for running simple bytecode like the Spin interpreter, or for CPU emulation of 8 bit systems. But it would be of very little value for python bytecode, or any other dynamically typed language (like JavaScript), because the instruction dispatch depends on the data types at run time. XBYTE would save a little bit of time in instruction decode, but the vast majority of the time in a python bytecode interpreter is spent elsewhere.

    Which brings me back to my original question: where is the performance hit? Of course, implementing all of the Python bytecodes in Parallax assembly would be a major pain. Reportedly, function calls are the slowest. My guess now is that after that, on this platform, the slowest would be branching instructions, because they have to wait to get the new set of instructions. I can see why the skip command is so useful. If you can embed the optional branch in a skip, particularly in cog memory, no problem.

    I am still learning. Very interesting chip. Assembly programming is so different from Python programming.

    Asking questions about performance is a very good thing to do, because it forces me to understand the system. I am never sure what I am going to do next, but I think it might be very interesting to look at the assembly code for the bytecode instructions. Maybe first I will read the C code. I wonder how different it is from the CPython bytecode file.

  • The performance hit is in the dynamic nature of Python. You are dispatching based on a name, and then running down a list of namespaces to find the actual thing to return. Not much can be done about this.
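    That namespace walk can be seen in plain CPython. A minimal sketch of why the dispatch cannot be resolved ahead of time:

```python
# Plain-CPython illustration of the lookup chain a dynamic dispatch
# walks on every call: instance dict, then the class, then its bases.

class Base:
    def spin(self):
        return "Base.spin"

class Driver(Base):
    pass

d = Driver()

# "spin" is in neither the instance nor Driver itself; the interpreter
# finds it by searching the method resolution order at run time.
assert "spin" not in d.__dict__
assert "spin" not in Driver.__dict__
assert d.spin() == "Base.spin"

# Rebinding the name later changes what the same call site does,
# so the interpreter must repeat the lookup on every call.
d.spin = lambda: "instance.spin"
assert d.spin() == "instance.spin"
```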

    There is also little value in trying to port the bytecode core of the uPy VM to P2ASM, as the value of this overall effort is in keeping up with the upstream uPy implementation; you want any progress there to be reflected.

    The port is about using p2gcc, and adapting the interpreter to the loading and execution environment of the P2, and of course the HAL.

    IMHO one of the more interesting aspects is on how to expose the facilities of the P2 such as smartpins and cogs to uPy. It will be difficult (if not outright impossible) to parallelize uPy itself (it isn’t on the ESP32 either), but a nice way of spinning up spin or pasm drivers and communicate with them is certainly possible.

  • roglohrogloh Posts: 3,392

    @deets said:
    IMHO one of the more interesting aspects is on how to expose the facilities of the P2 such as smartpins and cogs to uPy. It will be difficult (if not outright impossible) to parallelize uPy itself (it isn’t on the ESP32 either), but a nice way of spinning up spin or pasm drivers and communicate with them is certainly possible.

    Yes, this is still fertile ground. We do have some COG init spawning added in there that Eric had put together and I later ported to P2 native MicroPython 1.13. I also added some smartpin control (trying to find a balance between existing P2 APIs and the MicroPython way of doing things - tricky), and soft SPI and I2C after that. There are still other missing pieces (ADC/DAC) and no doubt new 1.14-1.17 features. MicroPython itself is trying to abstract the HW, which helps. Earlier versions had different ways/APIs to do things in different ports, which was messy, but that seems to be getting cleaned up more over time.

  • The following is posted in two separate threads.

    If I have to run an ESP32 for WiFi access, I can just load MicroPython on it, load MQTT on top of that, and speak to the primary cog, say a Spin cog, over SPI, which can route the messages to and from the appropriate cogs.

    Does that sound reasonable? It has several advantages:
    • Mainstream versions of MicroPython and MQTT.
    • Python will run faster than on the P2.
    • No waiting for round-robin memory access.
    • Get all of the advantages of the smart pins.
    All sounds quite easy to do.

    Am I missing something?
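    The routing idea above can be sketched in a few lines. Everything here is made up for illustration: the topic names, the cog mapping, and the one-byte frame layout are all hypothetical; the MQTT side on a real ESP32 would use MicroPython's umqtt.simple.MQTTClient.

```python
# Sketch of the ESP32-as-bridge idea: an MQTT message arrives, gets
# tagged with a destination cog, and is forwarded over the serial link.
# Topic names, cog numbers, and the frame layout are hypothetical.

TOPIC_TO_COG = {
    b"badge/led": 1,
    b"badge/motor": 2,
}

def frame_for_cog(topic, payload):
    """Build a frame for the P2: [cog number, payload length, payload]."""
    cog = TOPIC_TO_COG[topic]
    if len(payload) > 255:
        raise ValueError("payload too long for one frame")
    return bytes([cog, len(payload)]) + payload

# On the ESP32, an MQTT callback would just do:
#     spi_send(frame_for_cog(topic, msg))
print(frame_for_cog(b"badge/led", b"on"))   # b'\x01\x02on'
```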

  • JonnyMacJonnyMac Posts: 7,649

    That's an interesting approach. I wonder... does the ESP-01 module have enough resources to run MicroPython? I ask because I have an idea for a custom P2 DEF CON badge for 2022, and I'd like to put a socket on that board for an ESP-01 to serve as a WiFi bridge for the P2.

  • __deets____deets__ Posts: 87
    edited 2021-07-01 16:03

    I’m skeptical about your assumption that uPy runs faster. The Xtensa architecture has its own peculiarities.

    In general, though, it’s a good approach, playing to the strengths of the individual platforms. I am using an ESP32 and a P1 connected to it via SPI. I don’t use uPy in this instance for various reasons, but there is no reason it shouldn’t work.

    @JonnyMac the ESP-01 seems to be an ESP8266 under the hood. That runs uPy. See

  • JonnyMacJonnyMac Posts: 7,649
    edited 2021-07-01 16:49

    I am using ESP32 and P1 via SPI connected to it.

    Do you have this publicly documented? If yes, would you mind sharing a link (I looked through your forum comments, but didn't see anything)? My networking experience is mostly RS-485, so this is new to me and I have a lot to learn. Thanks.

    I just bought a little USB programming adapter for the ESP-01; I'll see if I can get MicroPython to load into it.

  • lozinskilozinski Posts: 70
    edited 2021-07-01 17:08

    It is a good point that it may run slower. I had not thought of that. These forums are so helpful.

    As long as it runs Python and MQTT, I am a happy man. And even if that chip is not so good, there will be others that will run all of the Python libraries and talk to the P2. This solves a huge problem: the lack of mainstream libraries on these specialty chips. We just need to connect the two different chips. And it also effectively provides more memory to the P2. Hurrah!

    I have a similar question to Jon McPhalen's. How does SPI work?
    Here is the Wikipedia page on SPI.
    I guess all I really need to know is: is there a library in TAQOZ or Spin you recommend for controlling the SPI interface? What about on the ESP32 end?
    Here is the MicroPython library for SPI.

    And here is how to actually use spi in Python.

    Of course, from all of that it is still not clear to me. Can either side send anything they want at the same time? How do I know when one message is complete and the next one starts? How do I distinguish between an empty message and no message?

    Oh well, more to read. Here are some more Python examples.

    It all seems quite low level to me. I wish I could just send a message to a proxy object and have it forwarded to the appropriate object on the other side of the connection. Oh well.
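    That proxy idea can at least be sketched in a few lines of plain Python. The wire format and the send() transport below are entirely hypothetical; a real bridge would ship the bytes over SPI or a UART.

```python
# A minimal sketch of the wished-for proxy object: any method call on
# the proxy is serialized into one message for the other side of the
# link. The JSON wire format and send() transport are hypothetical.

import json

class RemoteProxy:
    def __init__(self, name, send):
        self._name = name
        self._send = send      # callable that ships bytes over the link

    def __getattr__(self, method):
        def call(*args):
            msg = json.dumps({"obj": self._name,
                              "method": method,
                              "args": list(args)}).encode()
            self._send(msg)
        return call

# Usage: each call becomes one message on the link (here, a list).
sent = []
led = RemoteProxy("led", sent.append)
led.blink(3)
print(sent[0])   # b'{"obj": "led", "method": "blink", "args": [3]}'
```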


  • @JonnyMac

    I’m happy to share the ESP/P1 project, but it’s on a private work GitHub repository. I can share it (it’s what we call a hack project), but I need to publish it first. I’ll ping you once that’s done; I plan to work on this next week anyway.

  • Okay, so SPI is a bit low level. Here is a great article.

    Perfect for sensors, where the master knows how much data it is getting. But it does not work in my case, where the amount of data going both ways is quite variable. Sigh. Maybe I am missing something.

  • __deets____deets__ Posts: 87
    edited 2021-07-01 17:36

    That’s a mistaken perception. This is a project similar to the one I alluded to earlier, and here is the SPI code.

    It establishes a protocol in which the controller (a Pi in this case, but it could be an ESP) creates one transaction whose answer includes how many bytes to read, and then reads them in a second operation. You can also use protocols with fixed sizes and then signify via a bit whether there is more data to come. Etc.

    In fact most protocols work that way. Ethernet has a minimum frame size of 64 bytes; a UART sends one datagram of N bits. It’s the problem of the overarching protocol to stitch messages together.
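    That two-step scheme can be sketched in plain Python. The 1-byte length prefix is an assumption for illustration; the real project's frame layout may differ.

```python
# Sketch of a length-prefixed framing protocol: the first read tells the
# controller how many bytes the second read must fetch. The 1-byte
# length prefix is an assumed layout, not the real project's format.

import io

def encode_frame(payload: bytes) -> bytes:
    if len(payload) > 255:
        raise ValueError("payload too long for a 1-byte length prefix")
    return bytes([len(payload)]) + payload

def read_frame(read) -> bytes:
    """`read(n)` models an SPI/UART read of exactly n bytes."""
    n = read(1)[0]                  # transaction 1: how many bytes follow?
    return read(n) if n else b""    # transaction 2: the payload itself

# Loopback demo with an in-memory buffer standing in for the bus.
buf = io.BytesIO(encode_frame(b"hello"))
assert read_frame(buf.read) == b"hello"

# An empty message (length 0) is distinguishable from no message at all,
# which answers the earlier question about empty vs. absent data.
buf = io.BytesIO(encode_frame(b""))
assert read_frame(buf.read) == b""
```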

    Speaking of UART: one can also just use that. The advantage in this case is the event driven nature - the ESP gets an IRQ if there is data. Instead of polling.

  • Awesome help. Thank you.
    Everyone here has been so helpful.
    UART sounds simpler: you have some data, send it!
    I am okay if it is slower. I care more about development speed than run-time speed.

  • @JonnyMac I just wanted to follow up on this, as you asked for my ESP32 code. Unfortunately I can't share the full repo yet, but I can share the ESP SPI communication and the P1 code. It is C++ on the ESP side, though. I love MicroPython, but in this project I can't use it because I need a C++-only dependency (the Ableton Link clock sync protocol).

    However, the MicroPython SPI driver is simple to use, and with it you should achieve the same thing.

    If you have any questions, don't hesitate to ask. I omitted the FullDuplexSerial.spin for obvious reasons ;)
