Micropython for P2 - Page 6 — Parallax Forums


Comments

  • Thanks for doing this testing, Roger. There was a serious flaw in the riscvp2 instruction cache (it's a direct-mapped cache, so if there was a cache line conflict, as there was in the factorial test, performance would really crater). I've done a quick hack to fix that in my current github repository. The results now look like:
    >>> run()
    Testing 1 additions per loop over 10s
    Count:  385330
    Count:  385336
    Count:  385319
    Testing 10! calculations per loop over 10s
    Count:  12923
    Count:  12922
    Count:  12923
    Testing sqrt calculations per loop over 10s
    Count:  144861
    Count:  145719
    Count:  145613
    
    The last test is a new one to test floating point performance. I'm curious as to how it will compare -- I know we spent quite a lot of time optimizing the floating point code in PropGCC, so presumably p2gcc will do quite well on this. The source code for the test is:
    import math
    import pyb
    def perfTest5():
      sqrt = math.sqrt
      millis = pyb.millis
      endTime = millis() + 10000
      count = 0
      while millis() < endTime:
        count += 1
        sqrt(count)
      print("Count: ", count)
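For reference, the addition tests follow the same pattern as perfTest5; here is a sketch of the 1-addition variant. pyb.millis is the MicroPython timer used on the P2 build; the CPython fallback is my own addition so the sketch runs anywhere:

```python
try:
    from pyb import millis          # MicroPython pyb timer, as used above
except ImportError:
    import time
    def millis():                   # fallback so the sketch runs on CPython
        return int(time.monotonic() * 1000)

def perfTest1(duration_ms=10000):
    endTime = millis() + duration_ms
    count = 0
    while millis() < endTime:
        count += 1                  # the "1 addition" being measured
    print("Count: ", count)
    return count
```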
    

    We also need to consider the trade-offs between features, memory, and performance. The current v17 version of my upython has about 230K available for the user:
    >>> import gc
    >>> gc.collect()
    >>> gc.mem_free()
    231616
    >>> help('modules')
    __main__          heapq             re                uio
    array             io                sys               ujson
    binascii          json              ubinascii         uos
    builtins          math              ucollections      ure
    collections       micropython       uctypes           ustruct
    gc                os                uhashlib          uzlib
    hashlib           pyb               uheapq            zlib
    Plus any modules on the filesystem
    >>> 
    
    There's also a 16K text buffer for VGA; we could remove that and the 8K or so of VGA code for applications that don't need a screen.
  • jmg Posts: 15,140
    ersmith wrote: »
    Thanks for doing this testing, Roger. There was a serious flaw in the riscvp2 instruction cache (it's a direct-mapped cache, so if there was a cache line conflict, as there was in the factorial test, performance would really crater). I've done a quick hack to fix that in my current github repository. The results now look like:
    >>> run()
    Testing 1 additions per loop over 10s
    Count:  385330
    Count:  385336
    Count:  385319
    Testing 10! calculations per loop over 10s
    Count:  12923
    Count:  12922
    Count:  12923
    Testing sqrt calculations per loop over 10s
    Count:  144861
    Count:  145719
    Count:  145613
    

    That's quite a gain, and it's interesting how similar these are coming in.

    Here is the same code on a PC, using monotonic as timer
    # Testing 1 additions
    # Count1:  91410591
    # Count1:  92121975
    # Count1:  92681136
    # Testing 2 additions
    # Count2:  60682219
    # Count2:  60478791
    # Count2:  60770668
    # Testing 3 additions
    # Count3:  48244784
    # Count3:  48324632
    # Count3:  48145277
    # Testing Factorial(10)
    # CountF:  4976026
    # CountF:  4977141
    # CountF:  4976896
    # Testing Sqrt(count)
    # CountS:  35193194
    # CountS:  35354701
    # CountS:  35139707
    PC factorial  4976026/91410591   5.44%
    P2 factorial  12923/385330       3.35%
    PC Sqrt      35193194/91410591  38.500%
    P2 Sqrt       144861/385330     37.594%
    
    Ratios are fairly similar to PCs, with P2 sqrt looking quite good, given there is no FPU.
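The percentage figures above are just each loop count normalized against the 1-addition baseline; computed from the first run of each test quoted above:

```python
# Ratios from the counts quoted above (first run of each test)
pc_factorial = 4976026 / 91410591    # PC: factorial rate vs addition rate
p2_factorial = 12923 / 385330        # P2: same ratio
pc_sqrt = 35193194 / 91410591
p2_sqrt = 144861 / 385330

print(f"PC factorial {pc_factorial:.2%}, P2 factorial {p2_factorial:.2%}")
print(f"PC sqrt {pc_sqrt:.3%}, P2 sqrt {p2_sqrt:.3%}")
```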


  • rogloh Posts: 5,122
    edited 2019-07-20 04:48
    @ersmith, yes, I wanted to add floating point to my build, but I can't seem to link it: the p2gcc library doesn't appear to be fully complete yet when it comes to floating point support.
    After attempting to link the Micropython code with the longlong.o, libm.o and float.o modules, I still have these missing symbols, so I can't complete that sqrt() test.

    Not sure if @"Dave Hein" knows whether these things still need to be ported from P1, or don't apply to this version of the library for P2, or how the state of floating point support was left there...?

    ___extendsfdf2 is unresolved
    ___floatsidf is unresolved
    ___floatunsisf is unresolved
    ___fpclassifyf is unresolved
    ___isinff is unresolved
    ___isnanf is unresolved
    ___truncdfsf2 is unresolved
    ___unordsf2 is unresolved
    _atan2f is unresolved
    _ceilf is unresolved
    _cexpf is unresolved
    _expf is unresolved
    _fmodf is unresolved
    _frexpf is unresolved
    _ldexpf is unresolved
    _logf is unresolved
    _modff is unresolved
    _nearbyintf is unresolved
    _powf is unresolved
    _sqrtf is unresolved
    _truncf is unresolved
    
    We also need to consider the trade-offs between features, memory, and performance. The current v17 version of my upython has about 230K available for the user:

    Agreed... in many cases, people who want the smallest-footprint Micropython with all the bells and whistles, while still leaving a lot of room for a heap, may find the compressed C code via the RISC-V approach suits their needs nicely. That's certainly true for end users in education etc. who don't necessarily need to understand how these applications work underneath; they just want as fully featured a Python environment as they can get, with as much space as possible left for their own Python code. The benchmark comparisons done to date also show the RISC-V based build to be not all that much slower than purely native code, though that testing is not very extensive. Those users will just take what they can get, so the highest performance won't generally be of interest there anyway.

    However, I suspect there may also be other situations (and not necessarily for Micropython) where you might want to use C with native P2 code and can sacrifice some extra hub space if you need:
    - more consistent execution timing independent of prior code paths taken and internal caching algorithms
    - to be able to debug entirely in native P2 assembly code without needing to go via any intermediate step of RISC-V or understand the intricacies of the JIT engine behaviour and its caching etc. To me at least, it would be quite a turnoff if I needed to go learn another instruction set just to debug my own C code on a P2 once things get down to a really low level. It's nice to just keep working in PASM2 and be able to disassemble code anywhere in hub memory without needing extra ISA translation tools, etc.
    - "in theory" the fastest performing code, though it looks like your combination of RISC-V and its (likely superior) optimised GCC code generation is actually pretty decent too, if the code doesn't branch excessively and the cache hits most of the time.

    So in my view there is probably room, and a desire, to continue with both these compiler approaches for now. In my opinion it may look bad to some if the only GCC toolchain for P2 supported by Parallax forced you to go via a RISC-V instruction set. It's just not a good look, and if you were Parallax you'd probably want to hide that away from customers, or questions will be asked. If it's only a temporary step on the path to a fully native P2 GCC port then maybe people could understand the reasoning, but if it remains like that long term, it's a problem.

    Cheers,
    Roger.
  • David Betz Posts: 14,511
    edited 2019-07-20 11:43
    Have you tried compiling MicroPython with Catalina or fastspin/C? They generate native P2 code without having to go through a translation from P1 to P2. Also, they're being actively developed so should be improving.
  • rogloh wrote: »
    However I suspect there also may be other situations (and not necessarily for Micropython) where you might be desiring to use C with native P2 code and can sacrifice some extra hub space if you need:
    - more consistent execution timing independent of prior code paths taken and internal caching algorithms
    - to be able to debug entirely in native P2 assembly code without needing to go via any intermediate step of RISC-V or understand the intricacies of the JIT engine behaviour and its caching etc. To me at least, that would be quite a turnoff if I need to go learn another instruction set just to debug my own C code on a P2 once things need to get down to a really low level. It's nice to just remain working in PASM2 and be able to disassemble code anywhere in hub memory and not need extra ISA translation tools for this, etc.
    True enough -- I don't think that riscvp2 is a replacement for a true "native" compiler. But unfortunately all of the C compilers we have now for P2 have compromises:

    - riscvp2 uses RISC-V assembly language (augmented with P2 instructions, but still)
    - p2gcc is based on an older GCC and its libraries, linker, etc. are still incomplete; plus the compiler targets P1 rather than P2
    - Catalina is pretty complete, but it's only C89 and based on LCC, which doesn't perform especially well (Ross does have an optimizer which seems to help with this)
    - fastspin's C support is C99 but still in alpha stage
    In my opinion it may look bad to some if the only GCC toolchain for P2 supported by Parallax forced you to have to go via a RISC-V instruction set. It's just not a good look and if you were Parallax you'd probably want to hide that away from customers, or questions will be asked. If it's only a temporary step on the path to fully native P2 GCC port the maybe people could understand the reasoning for this but if it remains like that long term, it's a problem.
    I haven't heard anything about Parallax supporting any compiler for the P2 other than Chip's Spin compiler. I'm not sure what their long term strategy is. At one point they were going to call a tools meeting, but I haven't heard anything about that, so we'll have to see. Until then we're on our own, and I guess we all work on whatever interests us. For me, the RISC-V toolchain is attractive because it is complete (libraries, linker, and all) and if the P2 for some reason doesn't arrive or doesn't gain traction then at least I'll have some experience with an instruction set that is going to be widely supported.

    Frankly I think that for P3 Chip would be wise to use a standard instruction set like RISC-V so that Parallax can use off-the-shelf solutions for tools, with some minor work to augment them for the custom instructions.

    Regards,
    Eric
  • If you ask me, Parallax will most likely ramp up Spin 2 with Prop Tool first. Easy win + large userbase who will jump on that, because it's familiar and effective. That means some revenue, moving chips right away. And all that is just a slot in too. Easy peasy. I also suspect having some revenue moving will help fund or at least make the decisions related to C and pro type tools in general. Parallax is bootstrapping pretty much anything that they choose to do.

    I have always thought that was important to consider.

    All the stuff we are doing will get some traction out there. (and that's you guys, not me as I've been out for a while due to professional commitments, but you get the idea here. )

    We probably will have a meetup. (sure hope so, it has been way too darn long since that happened) There will be conversations. There will also be conversations the moment Parallax knows this is a for sure, no more revisions, GO too. So close. Fingers crossed that Chip nailed it. (my bets are he did)

    After that, it's game on for other efforts. They've got Blockly to think about too.

    "Work on whatever interests us"

    Yup. That is precisely how they would have it go too. It's good stuff.

  • jmg Posts: 15,140
    ersmith wrote: »
    Frankly I think that for P3 Chip would be wise to use a standard instruction set like RISC-V so that Parallax can use off-the-shelf solutions for tools, with some minor work to augment them for the custom instructions.
    Yes, that would be looked at very closely in any P3.
    That's also what makes RISC-V emulation on P2 so appealing: any tuning to the emulation engine benefits ALL RISC-V tool flows, and it's quite impressive how well the emulation path works.

    With Python working on P2, Parallax may also consider a Python to Spin translator, so that early proof-of-concept code working broadly in Python can be moved to Spin.

    Does Spin 2 support Floating point ?
  • jmg wrote: »
    ersmith wrote: »
    Frankly I think that for P3 Chip would be wise to use a standard instruction set like RISC-V so that Parallax can use off-the-shelf solutions for tools, with some minor work to augment them for the custom instructions.
    Yes, that would be looked at very closely in any P3.
    That's also what makes RISC-V emulation on P2 so appealing: any tuning to the emulation engine benefits ALL RISC-V tool flows, and it's quite impressive how well the emulation path works.
    Thanks. Yes, one of my purposes in doing the RISC-V emulation was to show how most of the P2 instructions can map pretty easily to RISC-V custom instructions. The RISC-V opcode space has slots reserved for doing just that, and so we can invent instructions like:
       drv  val, OFFSET(basepin)
    
    which copies the low bit of "val" into the pin whose value is in "basepin" (offset by the immediate value): so for example you can set up the base pin in a register and then easily access the pins BASE+0, BASE+1, and so on. That's actually a little more powerful than the P2 instruction (so sometimes needs 2 P2 instructions to implement) but would be a nice candidate for a P3.
    With Python working on P2, Parallax may also consider a Python to Spin translator, so that early proof-of-concept code working broadly in Python can be moved to Spin.
    I think that would be quite tricky -- Python is a higher level language than Spin, and capturing the intricacies of object inheritance and general class dispatch would be difficult in Spin. Going the other way (from Spin to Python) should be feasible though.
    Does Spin 2 support Floating point ?
    I think it does so in the same way as Spin 1, namely with extra objects.

  • I'd posted this result in the PropGCC thread, but it probably belongs here for people reading this whole Micropython thread later. An optimization was added that converts reads of fixed hub memory addresses into registers into a direct load of the constant 32-bit address, like this: "mov r1, ##address".

    Results with this optimization added to the p2gcc build of Micropython:
    Testing 1 additions per loop over 10s
    Count:  409829
    Count:  409804
    Count:  409803
    Testing 2 additions per loop over 10s
    Count:  310543
    Count:  310535
    Count:  310534
    Testing 3 additions per loop over 10s
    Count:  246596
    Count:  246590
    Count:  246590
    Testing 10! calculations per loop over 10s
    Count:  15729
    Count:  15730
    Count:  15729
    
  • rogloh Posts: 5,122
    edited 2019-07-24 10:08
    In some further work, I've been able to integrate the SD card support from Eric's RISC-V based Micropython version into my own native P2 version and add more of the missing modules. For comparison, this native P2 executable image with the extra features now takes up 285kB, leaving ~226kB for the stack and heap; realistically that's around 200kB or so for a Python heap, with a little spare for a couple of extra features to be added...

    The main things the P2 native version lacks over the RISC-V version are:
    - floating point :frown:
    - USB KBD/Mouse cog
    - Video cog
    It also obviously takes up a bit more room than Eric's more recent version so ultimately you also would lose a bit of heap, maybe ~30-40kB less. With a better GCC code emitter that is fully optimized for the P2 and with more registers available I would expect the code size for P2 native could ultimately come down a bit, hopefully somewhere in the order of 10% smaller, perhaps even more than this if there are a lot of local variable accesses. I know each local stack variable read or write can potentially save 8 bytes if a simpler "rdlong x, sp[offset]" format can be used there each time. There may be other things we discover that the P2 instruction set can do faster and with fewer instructions than the existing P1 sequences too.

    Perhaps some of the keyboard, mouse and video stuff might eventually become more dynamic Python module objects that you just import and use if/when you need them, read from flash or SD, and so with any luck they may not need to be permanently baked in. The floating point code is the bigger issue here: if you need floating point, as of right now there is no way to get it into the native P2 version without more work being done on the GCC math libraries for P1/P2. In some digging about I found that Micropython itself can supply some of the C code for the math functions, but it still depends on some underlying routines that appear to be missing in the P1/P2 libraries when I tried to compile it.

    RISC-V: (v15 tested)
    MicroPython eric_v15 on 2019-07-14; P2-Eval-Board with p2-cpu
    Type "help()" for more information.
    >>> help('modules')
    __main__          heapq             re                uio
    array             io                sys               ujson
    binascii          json              ubinascii         uos
    builtins          math              ucollections      ure
    collections       micropython       uctypes           ustruct
    gc                os                uhashlib          uzlib
    hashlib           pyb               uheapq            zlib
    >>> import gc
    >>> gc.mem_free()
    201344
    

    Native P2:
    MicroPython v1.11-105-gef00048fe-dirty on 2019-07-24; P2-EVAL with propeller2-cpu
    Type "help()" for more information.
    >>> help('modules')
    __main__          hashlib           re                uio
    array             heapq             sys               ujson
    binascii          io                ubinascii         uos
    builtins          json              ucollections      ure
    collections       micropython       uctypes           ustruct
    frozentest        os                uhashlib          uzlib
    gc                pyb               uheapq            zlib
    >>> import gc
    >>> gc.mem_free()
    217232
    
  • rogloh wrote: »
    RISC-V: (v15 tested)
    Roger, I posted the v17 result up thread if you didn't want to run it yourself:
    >>> gc.mem_free()
    231616
    
    That's with the 16K VGA screen buffer, VGA and USB code, and math code.

    Also, it looks like you're using the 16K debug buffer at the top of RAM ($7C000-$7FFFF)? I've left that alone for now.

    I'm looking into pulling the VGA and USB out into modules, but the most natural way to do that is with ELF object files. Which we had in P1, but don't have yet in p2gcc. Another reason it would be good for Parallax to support a GCC or at least binutils port to the P2.
  • Today I was able to cobble together enough of the additional pieces required for Micropython's floating point support to compile on the P2, so I could test the square root performance and see how much extra space it all consumed, which was the main thing I wanted to know.

    I found that even once you use Micropython's own maths routines, the code still needs to link to some inbuilt functions: __isnanf(), __isinff(), __floatunsisf() and __unordsf2(). These are normally provided by libgcc.a, but unfortunately this fundamental library is not present in the p2gcc port. I found samples for these functions elsewhere and just hacked them in to get it to compile. It also needed cexpf() for the complex math, which was pulled in too. Nothing was really properly tested here, just enough done to compile and run.
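For anyone curious what those helpers do, a NaN test like __isnanf just inspects the IEEE-754 single-precision bit pattern. A hypothetical Python illustration (isnanf here is my own stand-in, not the actual libgcc code):

```python
import struct

def isnanf(f):
    # Reinterpret the float as its 32-bit IEEE-754 single pattern:
    # NaN = exponent field all ones (0xFF) with a nonzero fraction.
    bits, = struct.unpack("<I", struct.pack("<f", f))
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return exponent == 0xFF and fraction != 0
```

Note that infinity has the all-ones exponent but a zero fraction, which is what distinguishes __isinff from __isnanf.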

    For the sqrt() benchmark test above which counts the number of square roots done in 10 seconds including a counter increment, the p2gcc code now gets these results (with some garbage collection firing up during the test):

    Count: 179776
    Count: 179790
    Count: 179363
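One way to keep garbage collection from firing mid-benchmark is to collect up front and disable the collector around the timed loop; gc.disable()/gc.enable() exist in both CPython and MicroPython. A sketch (time.monotonic is the CPython timer; the P2 builds above use pyb.millis instead, and the duration is parameterized so the sketch runs quickly):

```python
import gc
import math
import time

def sqrt_bench(duration_s=10.0):
    gc.collect()                 # start from a clean heap
    gc.disable()                 # no automatic collections during timing
    try:
        sqrt = math.sqrt
        end = time.monotonic() + duration_s
        count = 0
        while time.monotonic() < end:
            count += 1
            sqrt(count)
    finally:
        gc.enable()              # always restore the collector
    print("Count: ", count)
    return count
```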

    For reference, these extra floating point math and complex math modules in Micropython, plus all the floating point library routines they need, added about 38kB to my previous image. So there is only 188kB of free hub memory left in this build, which has to be shared between the Python heap, a stack, and any debug/high hub memory preservation or other missing pieces of Micropython etc. Note that this build enabled single precision only, and without the additional complex math stuff included I found it can shave 4700 bytes off that 38kB.

    I suspect this 188-192kB of hub memory probably doesn't leave much space for larger Python applications, and we'd all be hoping for a smaller footprint. RISC-V looks far better there. However, Eric and I might agree that right now, without a full P2 GCC compiler available, neither of these build scenarios is especially attractive for getting Micropython onto the P2. It can be done, but the choice is between a large hub-memory user with moderate performance via p2gcc's P1-to-P2 translation, or a more compact footprint that is slightly slower and harder to debug via the RISC-V + JIT P2 engine. I guess we ultimately just desire a proper GCC implementation for the P2.

    Roger.
    MicroPython v1.11-105-gef00048fe-dirty on 2019-07-25; P2-EVAL with propeller2-cpu
    Type "help()" for more information.
    >>> import math
    >>> import cmath
    >>> x = 23.4
    >>> x
    23.4
    >>> x * 18.93
    442.962
    >>> math.sin(1.23)
    0.9424888
    >>> math.pow(2.3,5)
    64.36343
    >>> x = 1+0.5j
    >>> x
    (1+0.5j)
    >>> y = 2-0.1j
    >>> y
    (2-0.1j)
    >>> x * y
    (2.05+0.9j)
    >>> x + y
    (3+0.4j)
    >>> math.sqrt(23)
    4.795832
    >>> help('modules')
    __main__          hashlib           re                ujson
    array             heapq             sys               uos
    binascii          io                ubinascii         ure
    builtins          json              ucollections      ustruct
    cmath             math              uctypes           uzlib
    collections       micropython       uhashlib          zlib
    frozentest        os                uheapq
    gc                pyb               uio
    Plus any modules on the filesystem
    >>>
    
  • rogloh wrote: »
    I guess we ultimately just desire a proper GCC implementation for the P2.

    I agree. Having a proper full featured compiler suite for P2 (including binutils) would be very useful and would give us some better options for micropython. GCC is the most obvious choice, but LLVM would probably work too.
  • ersmith wrote: »
    rogloh wrote: »
    I guess we ultimately just desire a proper GCC implementation for the P2.

    I agree. Having a proper full featured compiler suite for P2 (including binutils) would be very useful and would give us some better options for micropython. GCC is the most obvious choice, but LLVM would probably work too.
    Could you say that a bit louder so Parallax hears? :smile:

  • I've looked into LLVM backends once; it doesn't seem too hard to get a simple one going. There's also more interesting frontends for it than for GCC.
  • Wuerfel_21 wrote: »
    I've looked into LLVM backends once, it doesn't seem to hard too get a simple one going. There's also more interesting frontends for it than for GCC.
    Are you volunteering? :smile:

  • If only I had a P2, I'd give it a try.
    Speaking of which, if someone were to create a P2 emulator, that'd make tool development significantly easier (and enable automated regression testing)
  • Wuerfel_21 wrote: »
    If only I had a P2, I'd give it a try.
    Speaking of which, if someone were to create a P2 emulator, that'd make tool development significantly easier (and enable automated regression testing)
    I think Dave Hein already did that. Also, if you are volunteering to do LLVM for the P2 I would guess that Parallax might find a way to get you a P2 especially after the P2v2 boards become available.

  • Problem is that I'm notoriously unreliable. It would be unfair if they were to get me a P2 and then I don't deliver anything.
  • Has anyone tried compiling MicroPython with Catalina C? How is that performance?
  • David Betz wrote: »
    Has anyone tried compiling MicroPython with Catalina C? How is that performance?

    IIRC MicroPython uses some C99 features which Catalina doesn't support. fastspin supports C99, but I seriously doubt it's mature enough to compile MicroPython yet (I haven't tried it though).

  • kwinn Posts: 8,697
    edited 2019-08-08 18:11
    ersmith wrote: »
    David Betz wrote: »
    Has anyone tried compiling MicroPython with Catalina C? How is that performance?

    IIRC MicroPython uses some C99 features which Catalina doesn't support. fastspin supports C99, but I seriously doubt it's mature enough to compile MicroPython yet (I haven't tried it though).

    Only one way to find out! It might also be a good way to see where it needs some work. I am amazed at what you have accomplished so far with FastSpin, and am pretty sure you could add MicroPython to the list if you decided to do so.

    PS, after mulling over my previous post regarding changing the name of FastSpin may I suggest "EcOs" since having a single compiler that can integrate multiple languages into a single program truly makes it something of a programming ecosystem.
  • David Betz Posts: 14,511
    edited 2019-08-08 18:21
    kwinn wrote: »
    ersmith wrote: »
    David Betz wrote: »
    Has anyone tried compiling MicroPython with Catalina C? How is that performance?

    IIRC MicroPython uses some C99 features which Catalina doesn't support. fastspin supports C99, but I seriously doubt it's mature enough to compile MicroPython yet (I haven't tried it though).

    Only one way to find out! It might also be a good way to see where it needs some work. I am amazed at what you have accomplished so far with FastSpin, and am pretty sure you could add MicroPython to the list if you decided to do so.

    PS, after mulling over my previous post regarding changing the name of FastSpin may I suggest "EcOs" since having a single compiler that can integrate multiple languages into a single program truly makes it something of a programming ecosystem.
    I started looking at compiling MicroPython with fastspin a while ago, and it looks like it will require rewriting the Makefile, since fastspin compiles all of the files in a single step and that doesn't fit the normal GCC compile/link model.

    Maybe this could be hacked, though. It might be possible to define CC to just copy the .c files to the object directory and have fastspin be used as the linker rather than as the compiler. This might not work, though, if part of the MicroPython build process is to create libraries before linking them together to form an executable.

    How much hacking of the build process was needed to get it to compile with p2gcc?
  • jmg Posts: 15,140
    kwinn wrote: »
    PS, after mulling over my previous post regarding changing the name of FastSpin may I suggest "EcOs" since having a single compiler that can integrate multiple languages into a single program truly makes it something of a programming ecosystem.
    ecosystem is good, but 'Os' sounds too much like 'Operating System' in RTOS

  • David Betz wrote: »
    How much hacking of the build process was needed to get it to compile with p2gcc?

    Some hacking was required. From memory, the changes could be contained within the build's Makefile, because p2gcc behaves closer to GCC than the other tools (part of it is GCC), though its linker is different.
  • kwinn wrote: »
    PS, after mulling over my previous post regarding changing the name of FastSpin may I suggest "EcOs" since having a single compiler that can integrate multiple languages into a single program truly makes it something of a programming ecosystem.

    Wouldn't that sort of Smile redhat's Redboot team off?
    https://en.wikipedia.org/wiki/ECos
  • jmg wrote: »
    kwinn wrote: »
    PS, after mulling over my previous post regarding changing the name of FastSpin may I suggest "EcOs" since having a single compiler that can integrate multiple languages into a single program truly makes it something of a programming ecosystem.
    ecosystem is good, but 'Os' sounds too much like 'Operating System' in RTOS

    This doesn't really belong in this thread, but I think most people are already used to FastSpin. So why not just keep the status quo and call it the:
    Fast Compiler Set/Suite, containing FastSpin, FastBasic and FastC. If you want shorter cmdline names you can use fsc, fbc and fcc.
  • kwinn Posts: 8,697
    kamilion wrote: »
    kwinn wrote: »
    PS, after mulling over my previous post regarding changing the name of FastSpin may I suggest "EcOs" since having a single compiler that can integrate multiple languages into a single program truly makes it something of a programming ecosystem.

    Wouldn't that sort of Smile redhat's Redboot team off?
    https://en.wikipedia.org/wiki/ECos

    Oops, I suppose it would. I should have done a more thorough search before suggesting that.

    @rosco_pc Good suggestion.
  • I've posted a v18 binary in the first post. This one has some more compiler improvements, which should make it compatible with the new silicon (the old riscvp2 compiler used some features that have changed).

    Current benchmark results are:
    Testing 1 additions per loop over 10s
    Count:  387572
    Count:  387558
    Count:  389104
    Testing 2 additions per loop over 10s
    Count:  295832
    Count:  295830
    Count:  299384
    Testing 3 additions per loop over 10s
    Count:  238646
    Count:  238640
    Count:  237507
    Testing 10! calculations per loop over 10s
    Count:  13033
    Count:  13034
    Count:  13025
    Testing sqrt calculations per loop over 10s
    Count:  172535
    Count:  172484
    Count:  170822
    

    Free memory on a freshly booted system (and no SD card inserted):
    >>> import gc
    >>> gc.collect()
    >>> gc.mem_free()
    231152
    
  • Sounds great! I will have to try this. I've just been doing some Python programming and find I like it better than I thought I would. Maybe working with Spin for a while has given me a higher tolerance for languages that do block structure through indentation! :smile: