Catalina - ANSI C and Lua for the Propeller 1 & 2 - Page 16 — Parallax Forums

Catalina - ANSI C and Lua for the Propeller 1 & 2


Comments

  • RossH Posts: 5,476

    @Wuerfel_21 said:
    On Windows NT, especially if you have any sort of security software enabled, spawning a new process and attaching it to the console subsystem is comically slow. Reading directory information is also slow. This can combine into a make+gcc style build taking orders of magnitude longer than reasonable.

    Yes, disabling as much Windows security as possible does indeed speed things up - but not enough. Also, there are good reasons why you absolutely want as much security enabled in Windows as you can get (and then go add some more!) so it is not something users should have to do.

    Using MinGW in GH Actions is perfectly fine (assuming use as a cross-compiler on a Linux action runner), I set it up for flexspin/spin2cpp.

    I may be missing something here. Why install MinGW on Linux and cross-compile? Why not just use a Windows action runner? Because it would be too slow?

    The real big thing with actions IMO is that you can use it to auto-run any test scripts you have. For flexspin I made a modified version of spinsim, so P1 code can be tested end-to-end.

    Yes, I can see that would be useful. It would have saved me from a few embarrassing releases :)

    I know there was a P2 version of spinsim - do you use that?

    Ross.

  • Electrodude Posts: 1,660
    edited 2024-11-10 11:01

    @RossH said:

    @Wuerfel_21 said:
    On Windows NT, especially if you have any sort of security software enabled, spawning a new process and attaching it to the console subsystem is comically slow. Reading directory information is also slow. This can combine into a make+gcc style build taking orders of magnitude longer than reasonable.

    Yes, disabling as much Windows security as possible does indeed speed things up - but not enough. Also, there are good reasons why you absolutely want as much security enabled in Windows as you can get (and then go add some more!) so it is not something users should have to do.

    Installing security software isn't something anyone should ever have to do. The OS should already be inherently secure. If it isn't, it isn't a serious OS.

    Using MinGW in GH Actions is perfectly fine (assuming use as a cross-compiler on a Linux action runner), I set it up for flexspin/spin2cpp.

    I may be missing something here. Why install MinGW on Linux and cross-compile? Why not just use a Windows action runner? Because it would be too slow?

    Doing compilation (or, frankly, anything more complicated than word processing) on Windows just doesn't make sense. Every popular OS besides Windows (Linux, BSD, even Mac OS) is better, faster, more secure, easier to write correct programs for, and more consistent with other OSes. Official builds of PuTTY, a program which decidedly only makes sense on Windows because Linux (and newer versions of Windows) already comes with ssh, telnet, and nice serial port programs, are cross-compiled for Windows by a Linux machine (and there's a Linux build for testing purposes) because it's so much simpler than bothering with building anything inside Windows.

  • RossH Posts: 5,476

    @Electrodude said:
    Doing compilation (or, frankly, anything more complicated than word processing) on Windows just doesn't make sense. Every popular OS besides Windows (Linux, BSD, even Mac OS) is better, faster, more secure, easier to write correct programs for, and more consistent with other OSes. Official builds of PuTTY, a program which decidedly only makes sense on Windows because Linux (and newer versions of Windows) already comes with ssh, telnet, and nice serial port programs, are cross-compiled for Windows by a Linux machine (and there's a Linux build for testing purposes) because it's so much simpler than bothering with building anything inside Windows.

    I get where you're coming from, but OS wars are a bit like language wars and editor wars - neither side is ever likely to convince the other. I routinely use both Windows and Linux so I don't really have a "side" on this one.

    I think a more important question when it comes to developing free open-source software is whether you can develop it using free open-source software. I can still do that on both Windows and Linux, but I dropped MacOS when this became impossible. I don't get paid for developing my software, so I don't see why I should have to pay to develop it.

    As long as it remains possible to do this on Windows, I will probably continue to do so. Painful and slow as it sometimes is.

    Ross.

  • @RossH said:

    @Wuerfel_21 said:
    On Windows NT, especially if you have any sort of security software enabled, spawning a new process and attaching it to the console subsystem is comically slow. Reading directory information is also slow. This can combine into a make+gcc style build taking orders of magnitude longer than reasonable.

    Yes, disabling as much Windows security as possible does indeed speed things up - but not enough. Also, there are good reasons why you absolutely want as much security enabled in Windows as you can get (and then go add some more!) so it is not something users should have to do.

    Using MinGW in GH Actions is perfectly fine (assuming use as a cross-compiler on a Linux action runner), I set it up for flexspin/spin2cpp.

    I may be missing something here. Why install MinGW on Linux and cross-compile? Why not just use a Windows action runner? Because it would be too slow?

    Because it's just easier to deal with. The makefile was already set up to allow for cross-compilation, and it's easy to grab MinGW, Bison, etc. from the Ubuntu package manager. Also, I think Windows runners used to not exist / cost extra?

    Incidentally, for linux binary builds, I set it up to use musl-gcc to link the entire C library in to avoid dependency conflicts. It doesn't seem to increase the file size noticeably.

    The real big thing with actions IMO is that you can use it to auto-run any test scripts you have. For flexspin I made a modified version of spinsim, so P1 code can be tested end-to-end.

    Yes, I can see that would be useful. It would have saved me from a few embarrassing releases :)

    I know there was a P2 version of spinsim - do you use that?

    There's some P2-themed code in there, but I remember I couldn't get it to work. Possibly based on some FPGA prototype version...

  • RossH Posts: 5,476

    This is just a bit of advice for Catalina users ...

    I've been doing some more investigation into memory issues, because despite many recent Lua improvements there is still an outstanding issue that can affect both C and Lua programs.

    The issue occurs when the heap (which grows up from low memory) and the stack (which grows down from high memory) are both allocated in Hub RAM and collide. There is no simple fix for this issue. It cannot be solved just by having the dynamic memory allocation functions (i.e. malloc() and friends) check the stack before allocating from the heap. It also requires that the heap be checked before every operation that allocates from the stack - e.g. on every C function call. This would be horribly expensive, but I will consider adding it to a future release as a debugging aid.

    This issue only affects programs where the heap and stack are both in Hub RAM (i.e. CMM, NMM, LMM and XMM SMALL programs, but not XMM LARGE programs). Lua programs are more prone to suffer from it because they typically do more dynamic memory allocation than C programs, and recent Catalina releases - which allow larger and more complex Lua programs to execute from Hub RAM - have made it more likely to occur.
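    The collision described above can be illustrated with a hypothetical sketch in plain C (this is not Catalina library code, and the names hub_alloc/hub_push are invented for illustration): a simulated Hub-RAM-like region where the heap grows up from low memory and the stack grows down from high memory, with the two checks a complete fix would need. The hub_push() check is the one that would have to run on every C function call, which is what makes it so expensive.

    ```c
    #include <stddef.h>
    #include <assert.h>

    /* Hypothetical sketch (not Catalina code): a simulated Hub-RAM-like
       region in which the heap grows up from low addresses and the stack
       grows down from high addresses. */

    #define RAM_SIZE 1024
    static unsigned char ram[RAM_SIZE];
    static size_t heap_top = 0;          /* heap grows upward        */
    static size_t stack_ptr = RAM_SIZE;  /* stack grows downward     */

    /* What malloc() would need to do: refuse an allocation that would
       run into the stack. */
    static void *hub_alloc(size_t n) {
        if (heap_top + n > stack_ptr)
            return NULL;                 /* would collide with the stack */
        void *p = &ram[heap_top];
        heap_top += n;
        return p;
    }

    /* What every stack allocation (e.g. every C function call) would
       need to do: refuse a frame that would run into the heap. */
    static int hub_push(size_t n) {
        if (stack_ptr < heap_top + n)
            return 0;                    /* would collide with the heap */
        stack_ptr -= n;
        return 1;
    }
    ```

    With a 1 KB region, a 900-byte heap allocation succeeds, after which a 200-byte stack frame is refused rather than silently corrupting the heap, and so is a further 200-byte heap request.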

    Hoist with my own petard! :(

    The best option currently available is to de-fragment the heap, which minimizes the need for programs to allocate more heap space simply because they do not have a single contiguous free block of suitable size.

    In C, there is a library function to do this. Add a line like the following to the C program:

    malloc_defragment(); // defragment the C heap

    In Lua (as documented in Lua on the Propeller 2 with Catalina ) the sbrk() function can be used. Add a line like the following to the Lua program:

    propeller.sbrk(true) -- defragment the C heap

    In many cases just adding one line resolves the issue completely. The best place to add such a line is just before executing code that does large amounts of dynamic memory allocation (e.g. in Lua, lots of string operations). If it is not easy to identify a specific place in the program, the next best option would be to do it periodically.

    Ross.

  • Help!

    I've (re)discovered Catalina, and recently downloaded 8.1.something. And it's all doing what it should, and fun is being had. But..... I'm having difficulties with payload. It wants DLLs, which don't appear to be in the download, DLLs which I don't have, and although googling has found me things of about the right name, they don't result in a working payload.

    Is there a definitive place where I can get the right DLLs?

    For giggles, trawling through backups got me Catalina from a retired PC, and payload version 3.3 works, but strangely, it seems to lack features...

    Thanks, David.

  • evanh Posts: 16,022

    The DLL names?

  • DLL names? Well, I think, from error messages, it is libintl-8.dll and libiconv-2.dll, versions of which I've found, and there are also versions without the hyphen, e.g. libiconv2.dll. I'm hoping for a definitive answer as to what's required. Delving into the source tree hasn't provided the illumination I sought.

  • RossH Posts: 5,476
    edited 2024-11-22 05:43

    @"David Buckley" said:
    Help!

    I've (re)discovered Catalina, and recently downloaded 8.1.something. And it's all doing what it should, and fun is being had. But..... I'm having difficulties with payload. It wants DLLs, which don't appear to be in the download, DLLs which I don't have, and although googling has found me things of about the right name, they don't result in a working payload.

    Is there a definitive place where I can get the right DLLs?

    For giggles, trawling through backups got me Catalina from a retired PC, and payload version 3.3 works, but strangely, it seems to lack features...

    Thanks, David.

    Hi David

    If you downloaded from SourceForge, the DLLs should be installed by the Windows installer.

    If you downloaded from GitHub, the DLLs should be in the GitHub binary assets - perhaps in a previous release if there have been no binary changes. For instance, for 8.1.1 or 8.1.2, use the binary assets for 8.1.1 here and here. You need both.

    And yes, payload 3.3 is very old and will be missing lots of features! It is probably one of those features that requires the additional DLLs.

    Ross.

    Edited: You need both the Windows binary asset and the Windows Geany binary asset to get all the necessary DLLs.

  • RossH Posts: 5,476
    edited 2024-11-22 05:53

    @"David Buckley" said:
    DLL names? Well, I think, from error messages, it is libintl-8.dll and libiconv-2.dll, versions of which I've found, and there are also versions without the hyphen, e.g. libiconv2.dll. I'm hoping for a definitive answer as to what's required. Delving into the source tree hasn't provided the illumination I sought.

    Ah yes - it seems you now also need both libintl-8.dll and libiconv-2.dll in your PATH. I have so many things in my PATH variable (including paths to other versions of these DLLs) that I'd not noticed that omission before - thanks.

    These files exist in "%LCCDIR%"\catalina_geany\bin, so either copy those two files from "%LCCDIR%"\catalina_geany\bin to "%LCCDIR%"\bin, or add "%LCCDIR%"\catalina_geany\bin to your PATH.

    For example, execute the following in a Catalina Command Line window:

    copy "%LCCDIR%"\catalina_geany\bin\libintl-8.dll "%LCCDIR%"\bin
    copy "%LCCDIR%"\catalina_geany\bin\libiconv-2.dll "%LCCDIR%"\bin
    

    Ross.

  • And with those two copy commands, all is well and working.

    Thank you Ross!

  • RossH Posts: 5,476

    @"David Buckley" said:
    And with those two copy commands, all is well and working.

    Thank you Ross!

    No worries. I'll fix this properly in the next release.

  • RossH Posts: 5,476
    edited 2024-12-07 03:03

    I thought I would have had time by now to do a new Catalina release, but I have not. So, just to fix the missing DLLs reported by payload, I have updated the current Windows binary releases of Catalina 8.1.2 (the Linux releases do not need the DLLs).

    On SourceForge I have updated the Windows installer for 8.1.2 to include the missing DLLs. Or you can just use the copy commands given here to do this on an existing installation of 8.1.2 - that's the only change.

    On GitHub I have added new binary assets for Windows to release 8.1.2. Previously, this release said to use the assets from 8.1.1, but those are missing the DLLs.

    Ross.

  • RossH Posts: 5,476

    I have just ordered a new Raspberry Pi (a Pi 500), but in preparation and while waiting for it to arrive, I dug out my old Pi 3 and rebuilt Catalina using that. I haven't done that for literally years, but it all went well. It's actually much easier now than it used to be - the Pi OS seems to have improved since I last used it.

    It's a bit of a squeeze on the Pi 3 because it only has one GB of RAM, but it works. It should be better on the Pi 500 (which has a faster chip and also has 8GB of RAM).

    On the Pi you currently have to build Catalina from source. I found one issue in the bin/p2_asm script (seems to be a difference between Ubuntu and Debian) and one problem with the xterm program used by Catalina Geany (ditto) which I will look into - but other than that everything I have tried so far works fine. If anyone finds any issues, please let me know.

    I have held off upgrading to Windows 11 so far because every version of Windows seems worse than the one before, but I will eventually have to do so since Windows 10 support is finally coming to an end. But if the Pi 500 works out well, I am tempted to make the Pi my default development platform. On the Pi 500, I expect it to be faster to build Catalina than it will be on my Core i7 running Windows 11.

    I have updated the BUILD.TXT file to include detailed Pi instructions. These updates are currently on Github only. I will include them in the SourceForge packages in the next release.

    Ross.

  • RossH Posts: 5,476
    edited 2024-12-17 06:30

    @RossH said:

    I found one issue in the bin/p2_asm script (seems to be a difference between Ubuntu and Debian) ...

    More on this one. It's a bit weird. On the Pi, using #!/bin/bash -login as the first line of a bash script breaks a few things downstream, including make :(

    Really this line should have been #!/bin/bash --login (i.e. two dashes, not one) - but in most cases the scripts don't need login shells anyway, so I have simply removed the -login option, and now everything seems to work properly on the Pi.

    What bash -login actually does on the Pi is a bit of a mystery.
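    One thing that can at least be checked directly is whether bash considers itself a login shell: the read-only login_shell shopt is set only when bash was started with -l/--login. A quick experiment (standard bash, nothing Pi-specific assumed) shows the difference:

    ```shell
    # A login shell sources /etc/profile (and the user's profile file)
    # before running anything, and sets the login_shell shopt.
    bash --login -c 'shopt -q login_shell && echo "login shell"'

    # A plain non-interactive shell, as normally used for scripts, does not.
    bash -c 'shopt -q login_shell || echo "not a login shell"'
    ```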

    GitHub has been updated.

  • RossH Posts: 5,476
    edited 2024-12-17 07:09

    Interesting ...

    Pi 500 arrived today. Works a treat! And - as expected - much faster than the Pi 3. And faster than Windows on my PC.

    Times (minutes) to build all Catalina binaries and libraries from source:

    2.4 GHz Pi 500 (8 GB RAM, 32 GB SD, Pi OS)   :   2
    2.6 GHz i7 (16 GB RAM, 200 GB SSD, Linux)    : 1.5
    2.6 GHz i7 (16 GB RAM, 200 GB SSD, Windows)  :  16
    

    Times (minutes) to build all the usual Catalyst P2 demo binaries from source
    (i.e. P2_DEMO.ZIP, P2_EDGE.ZIP, P2_EDGE_VGA.ZIP, P2_EVAL.ZIP, P2_EVAL_VGA.ZIP):

    2.4 GHz Pi 500 (8 GB RAM, 32 GB SD, Pi OS)   :  23
    2.6 GHz i7 (16 GB RAM, 200 GB SSD, Linux)    :  13
    2.6 GHz i7 (16 GB RAM, 200 GB SSD, Windows)  :  69
    

    My Windows PC cost several thousand dollars. The Pi 500 costs under $100.

    I certainly won't be replacing my PC when it dies - or using Windows as my primary development environment from now on.

  • evanh Posts: 16,022

    Nice. I guess that shows the ability of a good cache when the CPU/RAM is performant. The SD card won't be a patch on the SSD.

  • @RossH said:

    @RossH said:

    I found one issue in the bin/p2_asm script (seems to be a difference between Ubuntu and Debian) ...

    More on this one. It's a bit weird. On the Pi, using #!/bin/bash -login as the first line of a bash script breaks a few things downstream, including make :(

    Really this line should have been #!/bin/bash --login (i.e. two dashes, not one) - but in most cases the scripts don't need login shells anyway, so I have simply removed the -login option, and now everything seems to work properly on the Pi.

    What bash -login actually does on the Pi is a bit of a mystery.

    As described in §6.2 of the bash manual, it makes bash source /etc/profile and ~/.bash_profile (or ~/.profile), which typically do things only suitable for interactive shells, before running your script, so you definitely don't want -login in a hashbang. Is there a reason you had it there to begin with?

  • RossH Posts: 5,476
    edited 2024-12-17 23:38

    @Electrodude said:

    As described in §6.2 of the bash manual, it makes bash source /etc/profile and ~/.bash_profile (or ~/.profile), which typically do things only suitable for interactive shells, before running your script, so you definitely don't want -login in a hashbang. Is there a reason you had it there to begin with?

    I originally added the --login option so users could have Catalina set up automatically in each new interactive shell (e.g. by adding commands to their .bashrc file). But it is not strictly necessary unless the scripts are going to be initiated other than from an interactive shell (e.g. if they were to be initiated from a desktop menu or icon).

    For example, I do NOT set up Catalina automatically because I am typically using multiple versions of Catalina at the same time (e.g. the current release and the next release). So I would typically do something like the following to use release X.X in a particular shell:

    cd ~/Catalina_X.X
    export LCCDIR=`pwd`
    source use_catalina
    
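    The steps above could be wrapped in a small helper function (hypothetical, not part of Catalina; the name use_release is invented here) defined in, say, ~/.bashrc. It assumes each release is unpacked in ~/Catalina_<version> and provides a use_catalina script, as described:

    ```shell
    # Hypothetical helper: point the current shell at one installed
    # Catalina release. Assumes releases live in ~/Catalina_<version>,
    # each containing a use_catalina setup script.
    use_release() {
        local dir="$HOME/Catalina_$1"
        if [ ! -d "$dir" ]; then
            echo "no such Catalina version: $1" >&2
            return 1
        fi
        cd "$dir" || return 1
        export LCCDIR="$PWD"
        . ./use_catalina
    }
    ```

    Because it changes the current shell's directory and environment, it has to be defined in (or sourced into) the interactive shell rather than run as a separate script - e.g. use_release 8.1 to switch to that release.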

    I've been doing some reading on both bash and make, and I think it may have to do with the fact that interactive shells use different job control to non-interactive shells. The details of job control are platform dependent, and it is possible that on the Pi the bash job control and the make job control are interacting in a different way to other flavors of Linux.

    Ross.

  • RossH Posts: 5,476

    I've revised the Pi OS instructions (in BUILD.TXT) to fix an issue with the suggested .Xdefaults settings, and also to recommend that Catalina should be installed in /opt/catalina. I've also added some desktop icons - one for Catalina Command Line and another for Catalina Geany.

    The icons have to use hardcoded paths, so they use /opt/catalina - which means they will work "out of the box" if you installed Catalina as recommended. Otherwise they can be edited to suit (they are simple text files). Using the icons means you no longer have to manually set the LCCDIR and PATH variables (although you still can if you want). This eliminates the need to use login shells.

    As usual, Github has been updated. Sourceforge will be updated on the next release.
