Status of P2 C/C++ Compiler?

Comments

  • OK, obviously we don't want to touch the Release_1_0 branch. So it seems like we should work with the master branch, and maintain it for both P1 and P2. The main focus will be P2, but we need to give Parallax the option of upgrading the P1 to the master branch at some point in the future.

    DavidZemon, does your server build for Windows, MacOS, and various flavors of Linux? And can your build process be moved to other machines? Each developer will need to do their own builds so that they can test changes they make.

    I'm not sure how we would automate the tests. That's way outside of my expertise. I think it would be good to come up with a set of test programs that people could run to verify that their code is able to pass those tests. As time goes on we could add more test programs to the suite.

    At my previous jobs we had an SQA group that would run build-validation tests each day. This helps to determine if someone broke the build, or introduced a bug. It's good to catch bugs as early as possible to reduce the amount of searching that has to be done on the commits. However, bugs will always get through. I recall many times that I had to do a binary search on months of daily builds to locate when a bug got checked in.

    I'm not saying that we need to do daily builds and validation, but it's a good idea to do it periodically in case something bad does get checked in.
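The "binary search on months of daily builds" mentioned above is mechanical enough to automate (it's the same idea git bisect applies to commits). A minimal sketch, where the build count and breakage point are made up and the real rebuild-and-test step is stubbed out by `is_bad`:

```python
# Hypothetical sketch: find the first bad daily build by binary search.
# is_bad(i) stands in for "check out build i, rebuild, run the tests";
# here it's stubbed with a known breakage at build 20.
def first_bad_build(n_builds, is_bad):
    # Invariant: build `lo` is known good, build `hi` is known bad.
    lo, hi = 0, n_builds - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(mid):
            hi = mid
        else:
            lo = mid
    return hi

# 90 daily builds need only ~7 test runs instead of 90.
print(first_bad_build(90, lambda i: i >= 20))  # → 20
```

This assumes the oldest build is good and the newest is bad; with real builds, `is_bad` would invoke the validation suite and the whole search could run unattended.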
  • I'm more than happy to help with testing GCC builds. I have Windows 10 and various flavors of Linux available. I could set up a Windows 7 machine if needed to get coverage on older Windows.

    It does seem like the best thing would be to have a series of test programs that get compiled by the GCC build and if deemed necessary, uploaded to an actual P1/P2 to be run. If needed, a 2nd P1/P2 could monitor pins to verify operation. Then one machine could sit and do all of this automatically. If we could get test coverage on the things Parallax thinks they need, then we could make sure that there is a path to distributing new builds to those users (perhaps on a fixed schedule like once or twice a year to allow for some additional human in the loop testing).
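The test-suite idea above can be sketched as a tiny runner that only looks at exit codes. The test names and stub "programs" here are placeholders; in a real harness each entry would be a binary compiled by the GCC build and run on an actual P1/P2:

```python
# Hypothetical sketch of a minimal suite runner: each "test program" is
# stood in for by a callable returning an exit code (0 = pass).
def run_suite(tests):
    results = {name: (prog() == 0) for name, prog in tests.items()}
    failed = [name for name, ok in results.items() if not ok]
    return results, failed

suite = {
    'hello': lambda: 0,  # placeholder for a compiled-and-run test binary
    'pins':  lambda: 1,  # deliberately failing placeholder
}
results, failed = run_suite(suite)
print(failed)  # → ['pins']
```

The same shape extends naturally: swap the lambdas for subprocess calls that compile, download to hardware, and read back a pass/fail result over serial.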
  • DavidZemon Posts: 2,700
    edited 2018-12-04 - 04:47:42
    Dave Hein wrote: »
    DavidZemon, does your server build for Windows, MacOS, and various flavors of Linux? And can your build process be moved to other machines? Each developer will need to do their own builds so that they can test changes they make.

    I actually just started a thread this evening about the server: https://forums.parallax.com/discussion/169383/unofficial-parallax-continuous-integration-build-server#latest
    I build Windows binaries via MinGW. I think those are pretty good across a wide variety of Windows OSes, but won't swear to it. If needed, we can get another TeamCity agent running and compiling natively on Windows... though I don't think most of the build systems in the Parallax community support that... sure would be nice if folks made use of CMake instead of GNU Make for everything....
    I don't worry much about "various flavors of Linux" since the only thing likely to cause incompatibilities is glibc. So far, no one has complained that I'm using too new of a glibc version. If anyone does, I'll switch to a Docker image with an older version of GCC for everything, just as I had to do with PropGCC when Ubuntu started shipping too new of a version of GCC to build PropGCC.
    For MacOS.... see my sad rant in the thread linked above. I'd love to start building MacOS binaries, but I need (access to) Mac hardware for that to happen. So far, no one has volunteered to help.
    Dave Hein wrote: »
    I'm not sure how we would automate the tests. That's way outside of my expertise. I think it would be good to come up with a set of test programs that people could run to verify that their code is able to pass those tests. As time goes on we could add more test programs to the suite.

    Thankfully it (the automation aspect) is well within my expertise. I won't pretend to be an expert test planner - but I can implement and automate with the best of them.
    Dave Hein wrote: »
    At my previous jobs we had an SQA group that would run build-validation tests each day. This helps to determine if someone broke the build, or introduced a bug. It's good to catch bugs as early as possible to reduce the amount of searching that has to be done on the commits. However, bugs will always get through. I recall many times that I had to do a binary search on months of daily builds to locate when a bug got checked in.

    I'm not saying that we need to do daily builds and validation, but it's a good idea to do it periodically in case something bad does get checked in.

    I am saying we need daily builds and daily (automated) validation. In fact, I'd like to see more than daily. I'd like new builds every time someone merges into the default branch, whether that's "master" or "dev" or "nightly" or whatever it gets called. I'm willing to put a little more money into a faster server capable of handling the load, but AWS is also an option (as in: start up a new AWS virtual machine every time a build gets triggered and then shut it down when all queued builds finish). AWS costs more in the long term, but it can be a great way to augment my slow laptop-pretending-to-be-a-server during a couple short months of heavy development. There are lots of ways to make this happen, each with their own drawbacks (usually it's a balance of money, expected server maintenance, and speed). Ideally... Parallax would decide they care enough about this to put a tiny bit of money into it.
    I'm more than happy to help with testing GCC builds. I have Windows 10 and various flavors of Linux available. I could set up a Windows 7 machine if needed to get coverage on older Windows.

    YAY! More help!
    It does seem like the best thing would be to have a series of test programs that get compiled by the GCC build and if deemed necessary, uploaded to an actual P1/P2 to be run. If needed, a 2nd P1/P2 could monitor pins to verify operation. Then one machine could sit and do all of this automatically. If we could get test coverage on the things Parallax thinks they need, then we could make sure that there is a path to distributing new builds to those users (perhaps on a fixed schedule like once or twice a year to allow for some additional human in the loop testing).

    Agreed. As for testing pin operations: you may not need a second chip. I do a lot of automated tests with PropWare by having one cog writing pins and another cog reading those same pins, rather than using an entirely separate chip. Same idea as you, but much easier to implement.
    We'll also probably want to set up a test rig of sorts when we get to the HAL (Simple v2). Something with a ton of different peripherals, all hooked up to specific pins as specified in a document somewhere (so others can build their own test rig and run the same code).
    All of this is stuff I know how to do and have done in the past. I'm just waiting for the project to start so we can get to work.
    David
    PropWare: C++ HAL (Hardware Abstraction Layer) for PropGCC; Robust build system using CMake; Integrated Simple Library, libpropeller, and libPropelleruino (Arduino port); Instructions for Eclipse and JetBrains' CLion; Example projects; Doxygen documentation
    CI Server: http://david.zemon.name:8111/?guest=1
  • DavidZemon,
    I love continuous integration builds (like what you have OpenSpin and other things on), we have that at work for when anyone submits changes and it emails us when builds break. Helps me a lot with catching issues for other platforms that I can't easily build.

    I'd love it if we got PropGCC updated to the latest version of GCC. I will help where I can, probably mostly with testing, but I might be able to help with implementation details of the backend once it's up and running (like adding support for whatever builtins will be needed, or library stuff?).

  • Dave Hein wrote: »
    OK, obviously we don't want to touch the Release_1_0 branch. So it seems like we should work with the master branch, and maintain it for both P1 and P2. The main focus will be P2, but we need to give Parallax the option of upgrading the P1 to the master branch at some point in the future.
    Why don't you want to use Eric's repository? It is based on a newer version of GCC.

  • So we basically have four choices:
                           PROS                         CONS
    1) Release_1_0   Works with simple library      Old version of GCC
                     Proven in the field            Has bugs
    
    2) Master Branch Uses a newer version of GCC    Untested in the field
                     Contains bug fixes             Issues with simple library
                     Contains some P2 changes
    
    3) Eric's Repo   Newer GCC than master branch   Not used as much as master branch
                     Contains bug fixes             Issues with simple library
                     Contains even more P2 changes
    
    4) Latest GCC    Contains latest GCC features   Requires extra work for P1
                     More bug fixes                 Requires extra work for P2
                                                    Untested in the field
                                                    Likely issues with simple library
    
    Anybody have anything to add to the list of pros and cons? Or are there any other choices for GCC?
  • The master branch of the Parallax propgcc repository does not have a newer version of GCC as far as I know. It just contains improvements and bug fixes over the release_1_0 branch.
  • Whatever branch we go with, I'm on board.
    I think it's important to support the newest version of GCC since we are kind of laying out the ground work for those who follow.
  • As GCC versions newer than the one in the Release_1_0 branch have issues with the Simple Library, wouldn't it be best to address those areas AND port the existing P1 and out-of-date P2 changes to the latest GCC, where the newest P2 can be catered for?
  • 78rpm wrote: »
    As GCC versions newer than the one in the Release_1_0 branch have issues with the Simple Library, wouldn't it be best to address those areas AND port the existing P1 and out-of-date P2 changes to the latest GCC, where the newest P2 can be catered for?
    Yes, I think that is what Eric suggested as well. The older versions of GCC generate compiler errors when built with many newer C compilers that are pickier about the code they compile. I assume many of these errors are resolved in the later releases of GCC.
  • OK, so if we go with option 4 we need to download the latest version of GCC, create a repository for it, and then integrate all of the P1 changes. Once we are confident that the tools function correctly for P1 we can then proceed with P2. Does anybody object to this approach? If there are no objections we need some volunteers to get this going.
  • Dave Hein wrote: »
    OK, so if we go with option 4 we need to download the latest version of GCC, create a repository for it, and then integrate all of the P1 changes. Once we are confident that the tools function correctly for P1 we can then proceed with P2. Does anybody object to this approach? If there are no objections we need some volunteers to get this going.

    I'm all for it. Can we get it started in the ParallaxInc GitHub org to start with?

    Also, is it worth forking this GitHub repository and basing our changes off of the latest released version maybe? https://github.com/gcc-mirror/gcc/releases/tag/gcc-8_2_0-release
    David
  • Sounds like you guys are off and running with this. Great!
  • Dave Hein wrote: »
    OK, so if we go with option 4 we need to download the latest version of GCC, create a repository for it, and then integrate all of the P1 changes. Once we are confident that the tools function correctly for P1 we can then proceed with P2. Does anybody object to this approach? If there are no objections we need some volunteers to get this going.

    I'd add two things.

    1. The GCC docs explicitly suggest reaching out to the gcc developers mailing list when people start writing for new architectures, in order to ensure that as we build it, it's in the best shape to get accepted upstream. The advantage of it being accepted upstream is that the GCC developers will keep the back-end current and bug-fixed, which means we avoid this dead end again.
    2. We should share with them the current state of our gcc tools. They may have a third way that's more effective.

    I'm 100% behind where you think this should go, Dave, but I believe reaching out is key.
  • red, are you volunteering to reach out to them?
  • Is there enough difference in the programming models between P1 and P2 that they should be handled separately?

    While it might make sense from a pragmatic point of view to treat the P2 as similar to P1 and "bend" the current propGCC to fit P2, is that the best way in the long run?

  • ctwardell wrote: »
    Is there enough difference in the programming models between P1 and P2 that they should be handled separately?

    While it might make sense from a pragmatic point of view to treat the P2 as similar to P1 and "bend" the current propGCC to fit P2, is that the best way in the long run?

    It's an interesting question. The P1 code base already supports a number of different models: LMM, CMM, XMM, COG.

  • I'm interested in being involved. I need to get more familiar with the GCC backend for P1 though.
  • ctwardell,
    At the pasm instruction level there are a lot of similarities. However, P2 has a bunch of new stuff. Particularly, indirection and hubexec (also you can call back and forth between hubexec and cog/lut spaces).

    I think it might be different enough to warrant being a "new" target type instead of a "mode" of the P1 target.

  • ctwardell wrote: »
    Is there enough difference in the programming models between P1 and P2 that they should be handled separately?

    While it might make sense from a pragmatic point of view to treat the P2 as similar to P1 and "bend" the current propGCC to fit P2, is that the best way in the long run?
    Hubexec in the P2 is similar to the P1 LMM mode. I think we just need to create something like another memory model that uses P2 ops instead of the P1 LMM pseudo-ops. The rest of the P2 code is almost identical to P1, except for some changes to instruction names, and using WCZ instead of WC, WZ.

  • David Betz wrote: »
    I'm interested in being involved. I need to get more familiar with the GCC backend for P1 though.

    I'm interested in getting involved too.
    C is my go-to language of choice.
  • Dave Hein wrote: »
    red, are you volunteering to reach out to them?

    I don't really have enough background in compilers to speak with too much authority, but as long as y'all are behind me to answer questions I'm more than happy to work on that.

  • Roy Eltham wrote: »
    ctwardell,
    At the pasm instruction level there are a lot of similarities. However, P2 has a bunch of new stuff. Particularly, indirection and hubexec (also you can call back and forth between hubexec and cog/lut spaces).

    I think it might be different enough to warrant being a "new" target type instead of a "mode" of the P1 target.

    I think what's interesting about this question is that there are two competing use-cases.

    Arguably, with the different memory models a separate but parallel target might make sense.

    On the other hand, from a user standpoint that means the user will end up having to manage two separate tool-chains, with the associated complexity that might bring (which, arguably, we can abstract away much like Arduino does).

    It's not a simple answer.
  • I'm in too. Just need to know where to start.

    Mike
  • Hmmm... I think I'll sit this one out lest there be too many cooks. Seems like it's time to pass GCC on to a new team.
  • David Betz wrote: »
    Hmmm... I think I'll sit this one out lest there be too many cooks. Seems like it's time to pass GCC on to a new team.

    I don't think there are "too many cooks".
    Most corporate software is written by a team of two dozen people. *lol*

    The five or so people here interested in pulling this off are a pretty lightweight team if you ask me.
  • David Betz wrote: »
    Hmmm... I think I'll sit this one out lest there be too many cooks. Seems like it's time to pass GCC on to a new team.
    But who's going to write p2load? The loadp2 in p2gcc only handles binary files. Don't we need a loader that understands ELF?
  • David Betz Posts: 13,258
    edited 2018-12-04 - 20:31:44
    Dave Hein wrote: »
    David Betz wrote: »
    Hmmm... I think I'll sit this one out lest there be too many cooks. Seems like it's time to pass GCC on to a new team.
    But who's going to write p2load? The loadp2 in p2gcc only handles binary files. Don't we need a loader that understands ELF?
    You can actually dump an ELF file to a binary image that might be good enough. The propeller-load program got started as a simple program that would extract the binary image from an ELF file and update its checksum so that the Propeller ROM loader would accept it. It didn't actually read the ELF file directly. It used propeller-elf-objcopy to extract the binary sections.
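The checksum fix-up step described above might look something like the sketch below. It assumes a simple additive rule where all bytes of the image must sum to 0 mod 256; the actual Propeller ROM loader's rule may differ, and the checksum offset here is made up:

```python
# Hypothetical sketch: patch one byte of a binary image so the whole
# image sums to 0 mod 256. `at` is the (assumed) checksum byte offset.
def fix_checksum(image: bytes, at: int) -> bytes:
    buf = bytearray(image)
    buf[at] = 0                      # zero the slot before summing
    buf[at] = (-sum(buf)) % 256      # choose the byte that cancels the sum
    return bytes(buf)

img = fix_checksum(b'\x01\x02\x03\x04', at=2)
print(sum(img) % 256)  # → 0
```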

  • David Betz Posts: 13,258
    edited 2018-12-04 - 21:00:04
    David Betz wrote: »
    Dave Hein wrote: »
    David Betz wrote: »
    Hmmm... I think I'll sit this one out lest there be too many cooks. Seems like it's time to pass GCC on to a new team.
    But who's going to write p2load? The loadp2 in p2gcc only handles binary files. Don't we need a loader that understands ELF?
    You can actually dump an ELF file to a binary image that might be good enough. The propeller-load program got started as a simple program that would extract the binary image from an ELF file and update its checksum so that the Propeller ROM loader would accept it. It didn't actually read the ELF file directly. It used propeller-elf-objcopy to extract the binary sections.
    Actually, if all you want to do is add ELF handling to loadp2, check out the files loadelf.c and loadelf.h from either the propeller-load or the PropLoader repository. They contain all that is needed to parse enough of the ELF file to satisfy the loader. I don't think they have any other dependencies.
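The "parse just enough of the ELF file" approach amounts to reading the file header to find the program headers, then collecting the loadable (PT_LOAD) segments. A self-contained sketch against a hand-built 32-bit little-endian image; the field layout follows the ELF spec, while the sample image itself is made up:

```python
import struct

PT_LOAD = 1  # segment type for loadable program data

def load_segments(data):
    # e_ident is 16 bytes; then the 32-bit header fields we care about:
    # e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags,
    # e_ehsize, e_phentsize, e_phnum.
    (e_type, e_machine, e_version, e_entry, e_phoff, e_shoff, e_flags,
     e_ehsize, e_phentsize, e_phnum) = struct.unpack_from('<HHIIIIIHHH', data, 16)
    segs = []
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        # Elf32_Phdr: p_type, p_offset, p_vaddr, p_paddr, p_filesz,
        # p_memsz, p_flags, p_align
        (p_type, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz,
         p_flags, p_align) = struct.unpack_from('<8I', data, off)
        if p_type == PT_LOAD:
            segs.append((p_offset, p_paddr, p_filesz))
    return segs

# Build a fake image: 52-byte ELF header, one 32-byte program header at
# offset 52, then 4 bytes of "code" at offset 84.
ehdr = b'\x7fELF\x01\x01\x01' + b'\x00' * 9            # e_ident: 32-bit, LE
ehdr += struct.pack('<HHIIIIIHHH', 2, 0x5c, 1, 0, 52, 0, 0, 52, 32, 1)
ehdr += b'\x00' * 6                                    # section-header fields (unused)
phdr = struct.pack('<8I', PT_LOAD, 84, 0, 0, 4, 4, 5, 4)
image = ehdr + phdr + b'\xde\xad\xbe\xef'
print(load_segments(image))  # → [(84, 0, 4)]
```

That (offset, load address, size) triple per segment is essentially all a loader needs to copy the image into place, which matches the description of loadelf.c above.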
  • David Betz Posts: 13,258
    edited 2018-12-05 - 14:26:01
    I guess I started the P2 GCC status thread but it seems to have taken off on its own. Is anyone keeping track of who has volunteered to work on what? I guess there are many areas where people could contribute. A GCC toolchain consists of lots of pieces:

    1) The GCC backend that generates P2 assembly code
    2) The assembler, either gas from binutils or one of our existing P2 assemblers
    3) The binutils linker and the rest of binutils
    4) The standard C library
    5) The standard C++ library (if we think it will fit)
    6) GDB
    7) The build system
    8) Test suites
    9) Integration with SimpleIDE or some editor/GUI
    10) Installers for the various supported platforms
    11) ... (what else?)