If you're looking for a small, inexpensive device, the ESP32 is quite nice for what it is. For a similar price the Pi Zero W is right there as well. I've been using those, as the *nix universe makes it much easier for me to get my head wrapped around something quickly than the Xtensa/Arduino toolset does. Just relating what I've been doing along these lines recently. The Pi Zero has a lot more potential than the ESP32 as well. Need to log large amounts of data? Use a big uSD card; problem solved very quickly.
The Pi Zero W is great value, but seems to have supply limits. Many stores tag it as MAX PER CUSTOMER: 1, and the Pi Zero (no WiFi) as out of stock...
It would make a great companion for P2 debug, and a P2D2 module already has a Pi connector.
For a more integrated one-board design, the ESP32-S2-WROVER datasheet says it has 4MB flash by default (and 2MB PSRAM) but can be ordered with 16MB flash, and the -S2 has full-speed USB and WiFi links.
The ESP32-S2-Saola-1 is a DIP module with a USB connector and a CP2102N USB-UART as the debug link. (The S2's inbuilt USB may need more work to get working?)
Question: If I want to write C code for the PC, what compiler/debugger should I use? I assume that these days, there are some free options? Is any compiler worth spending money on?
If the PC is running Linux, it would be GCC for sure. If Windows, it might still be best to use GCC if you wish to keep some portability options for supporting other platforms later, depending on how you write your C. Simple or "vanilla" C code should be reasonably portable already; however, once you start to use Windows-specific libraries and tools, that portability can start to disappear.
I've heard some GNU tools are supported in the shell now with the latest Windows, but I think you probably still need to install either Cygwin or MinGW/MSYS with GCC in a Windows setup and configure paths/environment stuff etc. Unfortunately this makes installation of GCC on Windows a bit clunkier, but it's only a one-off thing. Once done, you can simply type "make" or "gcc" from these environments and you are up and going right away without needing to learn IDEs, using whatever editor you like.
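To give a flavour: once one of those environments is installed, building a little tool really is just a couple of commands (assuming your source file is called main.c - the names here are only for illustration):

    gcc -Wall -O2 -o mytool main.c
    ./mytool

-Wall enables the common warnings and -O2 enables optimisation; both are optional but good habits.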
For debugging, things are not as easy: you need to learn the text-based GDB commands, or find an IDE that uses GDB underneath and hides some of this for you, presenting a higher-level debug interface for stepping through code and adding breakpoints etc, like you can with other integrated Windows dev tools such as Visual Studio. Debugging gets complex either way, via text commands or IDE, until you know how it all goes together.
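As a rough sketch of that text-based GDB workflow (the function and file names are made up for illustration) - compile with -g so debug symbols are included, then:

    gcc -g -o mytool main.c
    gdb ./mytool
    (gdb) break compile_file      # stop when this function is reached
    (gdb) run input.src           # start the program with an argument
    (gdb) next                    # step over one source line
    (gdb) print token             # inspect a variable
    (gdb) backtrace               # show the call stack
    (gdb) quit

An IDE front-end ends up issuing much the same commands underneath.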
GCC, as mentioned above, is widely used, but you may not want just one compiler, so Visual Studio is another choice.
If you are new to C, you also want a good syntax-highlighting editor with some language awareness - ways to quickly see declarations etc.
Here is an example of that in Visual Studio Code: https://code.visualstudio.com/docs/editor/editingevolved
So, I would probably write the compiler to be just a compiler, no GUI.
Is it reasonable to handle file I/O without getting OS-specific? Does C have standard libraries to facilitate file I/O? Right now, my 386 code just conveys memory structures back to the IDE and the IDE reads and writes the files. I mean, could I write a command-line application that automatically works across platforms?
Would it be good to avoid C++ and stick with straight C?
Yes, C certainly supports standard file I/O in a portable way using the stdio.h library functions, and has for a long time. You can very easily write a command-line tool that is cross-platform this way.
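For example, a byte-for-byte file copy using nothing but stdio.h compiles unchanged on Windows, Linux and macOS (the file names are just placeholders):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *in  = fopen("input.bin", "rb");    /* the "b" matters on Windows */
        FILE *out = fopen("output.bin", "wb");
        if (in == NULL || out == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }
        int c;
        while ((c = fgetc(in)) != EOF)           /* copy byte by byte */
            fputc(c, out);
        fclose(in);
        fclose(out);
        return EXIT_SUCCESS;
    }

The "rb"/"wb" binary modes are the one portability gotcha worth knowing: without the "b", Windows will translate line endings for you.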
As far as C vs C++ goes, I'd say if you are not already a good C++ programmer then probably stick with C for now. It will give you more control, and coming from low-level coding such as 386 assembly it will probably suit interfacing better and be easier to map to your data structures. You can still do it in C++; it just may take longer to get up to speed and make the most of the extra features it offers over plain C.
If you want to develop on your Windows PC but still be cross-platform, then you probably want to get msys2 (https://msys2.org). It will give you a bash command-line terminal and a full set of GCC dev tools, so you can work just like you would on Linux, but in a window on Windows.
It will build into a Windows executable by default, but I believe you can set up cross-compiling if needed. The source code you write should be compilable on Linux and macOS if you avoid using Windows-specific libraries.
C and C++ both have standard libraries that cover what you will want (like file I/O). You can look at ersmith's github repositories for lots of good examples of cross-platform code. Also, the OpenSpin source is an alright place to look, especially since it was ported from your x86 code, so you will recognize things a bit (names of stuff and how it works). There's no real reason to avoid using most C++ features; however, for your sanity's sake I would stick to the basics.
As for editors, I personally prefer Visual Studio 2019 (free); it has some really nice programmer-friendly features. My workflow for OpenSpin is to work in Visual Studio - editing, compiling, testing, and so on - then switch to the msys2 window and run make to have it build with GCC, to make sure my source code was good for cross-platform. Worked pretty well. You can use whatever editor you like, but I'd recommend one with C/C++ highlighting and decent code navigation (go to definition, go to declaration, and easily going back to where you were).
Which of these should I download?
"x86_64" for 64-bit
"i686" for 32-bit Windows
Okay, but what's the difference?
Get the 64-bit one; it has both 64-bit and 32-bit options for your command-line window and whatnot. So you can open the 32-bit msys2 window and build using the 32-bit toolchain, or open the 64-bit msys2 window and use the 64-bit toolchain.
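If I recall right, after installing msys2 itself you pull the compiler and make in from its package manager; something along these lines (check the msys2 docs in case package names have changed):

    pacman -S --needed base-devel mingw-w64-x86_64-gcc

Then open the "MSYS2 MinGW 64-bit" shell and gcc and make are on your path.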
For ease of portability, with any luck you can just do standard file I/O, classic malloc/free heap allocation, and keep your application single-threaded if at all possible. If you need to create independent threads and use other low-level OS synchronization capabilities it may become a little harder to port; you'd then need to use pthreads and more POSIX API stuff etc. Still possible of course, but it gets more involved. For a typical command-line application, hopefully just a single-threaded (possibly multi-pass) file read/write access model will work out okay for you.
Yep. There's no reason to make this complicated. A single thread is all that's needed.
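For what it's worth, the whole shape of such a tool can be as plain as this little skeleton (the file arguments and the pass comments are purely illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <input> <output>\n", argv[0]);
            return EXIT_FAILURE;
        }
        /* pass 1: read the input and build symbol tables;
           pass 2: re-read (or re-scan a buffer) and write the output.
           All single-threaded, nothing OS-specific. */
        printf("would compile %s -> %s\n", argv[1], argv[2]);
        return EXIT_SUCCESS;
    }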
So what's the difference between 32-bit and 64-bit implementations? Is it just that the 64-bit can address more memory more easily and take advantage of 64-bit data types?
Well, the processors have been 64-bit capable for quite a while, but I guess not everyone has changed over. There's probably a reasonable number of 32-bit installs around in legacy systems that people installed some time back and are somewhat stuck with, but I wouldn't really know the proportion.
Running a 32-bit program on a 64-bit system is still possible as a kind of compatibility mode. The 32-bit program is only going to see its 2 GB user space.
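A quick way to see the difference from C is to print a few sizes and build it both ways (just an illustrative snippet; the exact values depend on the platform's ABI):

    #include <stdio.h>

    int main(void)
    {
        /* typically 4 on a 32-bit build, 8 on a 64-bit build */
        printf("sizeof(void *) = %zu\n", sizeof(void *));
        /* 8 on 64-bit Linux/macOS, but still 4 on 64-bit Windows */
        printf("sizeof(long)   = %zu\n", sizeof(long));
        printf("sizeof(size_t) = %zu\n", sizeof(size_t));
        return 0;
    }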
I am going to put something out there from left field. I use Visual Studio with the .NET framework. The new paradigm with .NET Core has the ability to build for Windows or Linux, on x86, x64, or ARM. It packages everything, including the framework and shared-object dependencies, into the app package. No need for Mono or any of that garbage.
I recently built a command-line application in C# on my x64 laptop that I start as a service on a Raspberry Pi running Linux. There is a small caveat: I had to build the serial port support on the Linux box. Also, I don't know if the apps can display in X Windows. Haven't tried that.
I am not sure how mature .NET Core is yet, or how well cross-platform support is implemented. As I said, I had issues with the serial port object. But building traditional Windows "WinForms" applications with C# and .NET is simply a dream to work with.
I know this is going to be an unpopular post; just wanted to throw that out there.
Seriously.
Totally agree. Chip, PLEASE don't lock your work down via involvement with a proprietary source.
I think if you plan to release a tool, Chip, it won't be too difficult to release the two versions, 32- and 64-bit. They are just two separate builds and can come from the same source, so long as you don't get fancy and use explicit 64-bit data values or special intrinsics in your code, which may not work 100% the same or even be supported on 32-bit systems. Can't see why you'd need to.
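If you do need exact sizes somewhere, the safe route is the fixed-width types from stdint.h, so the same source means the same thing in both builds - a minimal sketch with a made-up record type:

    #include <stdint.h>

    /* hypothetical symbol record - every field has an explicit width,
       so it is laid out identically in 32-bit and 64-bit builds */
    typedef struct {
        uint32_t address;   /* always 32 bits */
        int32_t  value;     /* always 32 bits */
        uint8_t  flags;     /* always 8 bits  */
    } Symbol;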
It's probably also useful to learn in advance what you might need to bundle with the command-line application for it to "just work" under Windows, such that the average user won't need to install too many other utilities or things such as GCC itself... I'm not sure what other files/DLLs might have to come along for the ride in that situation, but the documentation of Cygwin/MinGW/MSYS etc should cover it.
For everything else, including non-Windows/non-PC systems such as the RasPi etc, hopefully you'll also be in a position to release the actual source code, so this tool can be built by others from source on all platforms using a Makefile or other script, which is what typically happens in such situations. Otherwise you'll need to provide binaries for other platforms, which will be harder to manage, and not everyone will want to use/trust a binary.
If you kept the code straightforward, then we could perhaps run it on the P2 with MicroPython.
Python is an indentation language, so you'd be right at home. None of that horrid { and } stuff.
I use Visual Studio Code (free) for editing and it has good syntax highlighting, and there is a terminal mode for running your code within VS Code without the need for external compilation. You just use external compilation if you need to create an .exe.
I'll leave others to comment on whether this is a viable alternative, since I know nothing of C compilation for comparison, but it certainly sounds like GCC etc is way more difficult than Python.
https://hackaday.io/project/10576-mucpu-an-8-bit-mcu/log/36010-writing-an-assembler-in-python
If you want to run .py scripts, you need Python installed.
...I know nothing of C compilation for comparison, but it certainly sounds like GCC etc is way more difficult than Python.
GCC is a powerful and complex tool internally with a myriad of options, but if you write simple code you generally won't need many command-line settings and can mostly just use its defaults, which makes things much easier. In those cases it is not too hard to use and you won't need to worry too much about how it works. It definitely gets harder if you want something customised for a particular reason, or you have to wade into linker settings or performance optimisation etc, but you'd need to do that with any tool once you change something down at a low level.
I think for this type of standard command-line application, the typical GCC compiler defaults should suffice if you follow normal programming conventions and don't do crazy things like use massively sized stack variables that blow memory limits. I guess if you were allocating massive structures dynamically you might have to consider a system's heap limits as well, but on a typical PC the default resources should be large enough to work out okay.
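Concretely, the thing to avoid is a huge local array, since locals live on the stack and the default stack is often only a few MB; the same buffer from malloc is fine (the sizes here are just illustrative):

    #include <stdlib.h>

    #define BIG (64 * 1024 * 1024)    /* 64MB, illustrative */

    void risky(void)
    {
        char buf[BIG];                /* on the stack - likely blows the limit */
        buf[0] = 0;
    }

    void safer(void)
    {
        char *buf = malloc(BIG);      /* on the heap - fine on a typical PC */
        if (buf != NULL) {
            buf[0] = 0;
            free(buf);
        }
    }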
The idea of having MicroPython compiling tools is interesting. Whether it could all fit is hard to know until it is coded. If it is a compact enough tool, and a local P2 filesystem (flash or SD) is eventually supported by either FlexC or p2GCC, the P2 could probably also run this as a C-based tool locally as well, and be much faster.
Is there much need for being 32-bit compatible these days, or is everything 64-bit now?
I think Evanh and TonyB_ have older machines for curious reasons.
Classrooms tend to have older machines, and may go back as far as 32-bit.
64-bit systems can address large amounts of memory directly; 32-bit systems simply cannot - they rely on 64-bit extensions such as PAE:
https://en.wikipedia.org/wiki/Physical_Address_Extension
https://www.brianmadden.com/opinion/The-4GB-Windows-Memory-Limit-What-does-it-really-mean
Chip,
If you are just making a command-line tool (a compiler), then just stick with 32-bit; you are extremely unlikely to run into memory issues. If for some reason you do end up needing 64-bit later on, you can trivially convert over to it.
It's also pretty rare to run into a system that can't run 64-bit code. Mainly it's really ancient stuff, or the rare person who refuses to upgrade from some ancient version of Windows, or who chose 32-bit over 64-bit even though their machine is 64-bit and has more than 4GB of RAM (which all sits unused on 32-bit systems). So if whatever you do does end up needing 64-bit, you won't be excluding many (if any) users.