Now what was it I said earlier about "reading between the lines"?
However, you have now successfully made your point about propgcc having a faster downloader than Catalina. Not to mention propgcc using -D for defining command line symbols etc etc.
I'll have a look at your loader code when I get time. Now can we please all agree to move on to more substantive issues?
Ross.
I didn't mention GCC. I only pointed to propgcc as the Google Code repository containing the loader code. I seem to remember mentioning this earlier when I had written the loader for ZOG. If you'd like me to send you a zip file containing the sources so you don't have to look in the propgcc project that would be fine with me.
What you don't seem to realize is that I've never used just one compiler in my entire career. I like to have my code compile and run on whatever compilers are available for the platforms I use. In the case of the Propeller, this would include Catalina, ZOG, and PropGCC. I considered buying ICC as well but it seems to be a dead product. I don't just pick one because I like to write portable code and I know that people who use my code may choose different compilers for their own reasons. I *want* my code to run on Catalina. What I don't like is having to decorate it with lots of typecasts and alternate constructs just because they are required by one compiler. Some of this is of course always necessary since C programs are usually not perfectly portable, but it would be nice if something like NULL, which is sprinkled all over my code, didn't have to be modified for one target compiler.
By the way, I'm not only interested in Catalina and PropGCC. I also continue to bug Heater about getting the little-endian ZOG working because I think it offers the ability to fit more code in LMM mode which may be helpful both for Propeller-1 and Propeller-2. Even the 128k of hub memory we've been told to expect on P2 is small when used with the huge code sizes generated by the LMM instruction set. The executable for xbasic on a PIC24 is only about 32k but the Catalina executable for the Propeller is almost 200k.
I don't understand why you spend so much time justifying your choice of (void *)0 as the value of NULL. It may be that it is allowed by the standard but it certainly doesn't seem to be the more common definition and since 0 is also allowed by the standard why not use the value that will cause the least trouble for people who are familiar with other compilers?
David, just what is your problem here? I'm not justifying anything, nor was my post intended to rile you. Tor made a reasonable point, and I responded in kind since I thought he (and others) might possibly be interested in learning something they may not have known. If you yourself are no longer interested - even though you raised the issue in the first place - then please feel free to not respond any further.
David, I will heed your advice and not respond to your post when I am probably overtired.
Ross.
Can you explain why you won't consider changing your definition of NULL to just zero? I made this same unfortunate choice a while back when I was working on a runtime for ZOG. It seems perfectly reasonable to use a void* cast since it makes an integer into a pointer. However, as you yourself have pointed out, a pointer can point to different address spaces on some processors (Harvard architecture machines mostly) so the cast just gets in the way. Some of the articles you've pointed to mention that zero can always be used to initialize a pointer (except in obscure varargs cases) and that the compiler will arrange to translate the zero into whatever bit pattern indicates a null pointer for that particular machine. This suggests to me that NULL should be zero so it can take advantage of those rules. What is the problem with changing that one define in your library? I know what you have is allowable, but why not choose the value that will cause your users the least trouble? One of the biggest advantages of C is that code can be moved from system to system with very little change if it is written in a portable fashion. My goal is to maintain that portability as much as possible, and that suggests that all compilers should make similar choices on things like this if possible. Some people will only care about running code on a single platform and for them these issues are not important, but they are to people who want to write portable code not tied to a particular compiler on a particular platform.
David,
I'm breaking my own promise about posting when tired here - but you really are beginning to try my patience!
I don't feel a need to explain any further a design decision that is not only perfectly legitimate according to the ANSI C standard, but is also described by those who would seem to know a bit more than either of us (here) to be potentially useful for diagnostic purposes since it enables the compiler to generate a warning message about C code that may not be portable between compilers. If your goal was truly to maintain maximum portability, I would think you would be all in favor of this feature.
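[Illustration: a minimal, compilable sketch of the diagnostic described above. The MY_NULL macro is invented for this example as a stand-in for a Catalina-style definition - it is not taken from Catalina's actual headers.]
#define MY_NULL ((void *)0)   /* stand-in for a pointer-typed NULL */
int main(void)
{
    int *p = MY_NULL;     /* pointer context: fine with either definition */
 /* int  n = MY_NULL; */  /* uncomment this line: with a pointer-typed NULL a
                             conforming compiler must issue a diagnostic here,
                             but with NULL defined as plain 0 it would compile
                             silently */
    return p == MY_NULL ? 0 : 1;   /* always exits 0 */
}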
I'm sorry that your experience leads you to conclude that 0 is the only reasonable value for NULL, but this is simply not the case - as I have tried to point out. I am also sorry that you have grown so used to being able to use NULL when you really mean the constant value 0 on the compilers you use, but these are simply not the same thing. At the risk of annoying you further, I will point out that this issue is also addressed in the reference I have given you several times now, in answer to question 5.10.
Seriously David, I have tried to point out reasonably politely that perhaps your understanding of these issues may be incomplete - and I have given you references to explain each point I have tried to make. I could have chosen simply to tell you to go and read the ANSI C standard itself, but given how impenetrable the specification can be, you have quite rightly complained that this would not be appropriate.
You are perfectly entitled to ask me for the rationale behind my design decisions, and I have generally been happy to oblige - even to the point of putting up with some quite rude responses from you when you disagree with them. You are also perfectly entitled to make different design decisions on your own C compiler when you eventually get around to writing one. Since Catalina is open source, you are even perfectly entitled to take Catalina and modify it to suit your own needs if you don't like specific decisions I have made.
What you are not entitled to do is raise a continual stream of fairly insignificant issues and make each one out to be some kind of fatal flaw in the design of Catalina, and then demand that I change my compiler for no reason other than just because it would better suit your own particular C programming style. Why you would do such a thing when you have already declared your intention not to use Catalina again I will leave to others to figure out.
In the future I will just add the following code to my program's header file:
#undef NULL
#define NULL 0
I guess that's not such a big pain. I'll drop this discussion now.
You'll all be happy to learn that I've given up my crusade to get Ross to change the definition of NULL in Catalina so I've decided to solve it for my program using the following code:
#undef NULL
#define NULL 0
I tried this just now and I get an error message from the compiler complaining of a macro redefinition of NULL. Anyone have any idea why this might be happening? Is NULL a special identifier that can't be redefined?
Never mind. I guess many of the standard include files define NULL as (void *)0 so just adding my one redefinition of it is never going to work unless I can guarantee in all cases that my include file is included last. Sorry! This was foggy thinking on my part and won't work unless all redefinitions of NULL in the standard header files are protected by #ifndef NULL.
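[Illustration: the guard convention referred to above, as a sketch - not the actual contents of any particular compiler's headers. A header that defines NULL this way lets an earlier user definition survive later #includes:]
#ifndef NULL
#define NULL ((void *)0)
#endif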
Hi David,
The most (in fact the only) truly portable solution is to not use NULL when you really mean 0. The constant value 0 is a valid initializer for all pointers, including function pointers. It is a common misconception to assume that NULL is defined as a macro to mean 0, but this is compiler dependent.
However, in a spirit of goodwill, I will look at changing Catalina. I can't promise anything though, since it was not actually my decision in the first place - NULL was defined this way in the original Amsterdam Compiler Kit by Andrew Tanenbaum and Ceriel Jacobs (for Minix I believe). Changing it may have other undesirable consequences.
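[Illustration: a small, self-contained sketch of the point about the constant 0 - the variable names are invented:]
#include <stdio.h>
int main(void)
{
    char *cp = 0;          /* the constant 0 is valid for any object pointer */
    int (*fp)(void) = 0;   /* ...and for any function pointer */
    if (cp == 0 && fp == 0)
        printf("both pointers compare equal to 0\n");
    return 0;
}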
It's an interesting point... I would have thought that NULL==0, but I have never really used it in a way that it matters...
I think that in C++, NULL always is 0, right? I wonder how it is in other C compilers, like ImageCraft and the PIC compiler...
If it's part of a standard library that you're using I think it would probably be a bad idea to change it without discussing it with the original authors. I'll work around this rather minor problem. Sorry this got blown out of proportion!
Yes it is interesting. While null is always equated to some form of zero, there are in fact many alternatives. Here are just a few:
0
0L
(void *)0
(const void *)0
All of these will behave differently, and none of them means that a pointer assigned this value will actually have a binary representation of all zero bits. And yet the C standard does specify that you must be able to compare such a pointer with the constant zero, and that the comparison must behave as if the pointer were zero. There are a few other rules that mean people naturally assume a null pointer has a binary value equivalent to the integer value 0 - but the two may not even be the same size in bits, let alone have the same value!
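[Illustration: a compilable sketch of the distinction being drawn - the comparison with zero is guaranteed by the standard, the bit pattern is not:]
#include <stdio.h>
#include <string.h>
int main(void)
{
    int *p = 0;   /* a null pointer, however the machine represents it */
    unsigned char zeros[sizeof p] = {0};
    /* Guaranteed: a null pointer compares equal to the constant 0. */
    printf("p == 0        : %d\n", p == 0);
    /* NOT guaranteed: that its bytes are all zero - though they are on
       most modern machines, which is why the assumption usually holds. */
    printf("all bits zero : %d\n", memcmp(&p, zeros, sizeof p) == 0);
    return 0;
}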
Is it me or is this nuts?
A pointer may not be the size of an int or any other numeric type. E.g. 8086 FAR pointers addressed a 20-bit space when ints were 16 bits.
A NULL, basically invalid pointer, may not have all bits equal zero. Many machines in the past had special values for it.
The compiler tries to be clever with the result that writing:
int * p;
p = 0;
may result in p having a binary representation that is not all bits zero!!
Or conversely, writing:
int z = 0;
p = (int *)z;
may result in a pointer that is not a null pointer on some machine. I.e. it has the bits all zero when it should not.
So, if I happen to have some data that actually lives at address zero and my machine uses all bits zero to indicate NULL I have a bit of a problem.
Just seems that having the compiler automatically convert that "0" in my pointer assignment to a non zero value is very weird. No wonder null pointers get debated a lot.
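[Illustration: the two cases above side by side, as a compilable sketch with invented names:]
int main(void)
{
    int *p1, *p2;
    int  z = 0;
    p1 = 0;          /* null pointer CONSTANT: the compiler substitutes the
                        machine's real null representation, whatever it is */
    p2 = (int *)z;   /* a runtime zero forced into a pointer type: this is
                        implementation-defined and NOT guaranteed to yield
                        a null pointer */
    (void)p2;        /* on common hardware the two happen to agree */
    return (p1 == 0) ? 0 : 1;   /* always exits 0: p1 really is null */
}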
I guess a lot of it comes from the weird architectures that C has been ported to. One machine had to have a special instruction added to compare C pointers with null values, and on another machine (a Lisp machine) the null pointer is not an integral value at all!
Even the 128k of hub memory we've been told to expect on P2 is small when used with the huge code sizes generated by the LMM instruction set. The executable for xbasic on a PIC24 is only about 32k but the Catalina executable for the Propeller is almost 200k.
Hi David,
Sorry - in our heated discussion over nothing last night I missed this point.
As we have since discussed elsewhere, the 200k you are seeing is the binary file size, not the C program size. When compiling with Catalina on the C3 you need to subtract around 128k.
I just compiled your xbasic program with Catalina 3.5 and the code size is about 55k with data size about 16k. I expect to get that down further, but I doubt it will ever fit in the 32k Hub RAM available on the Prop 1. However, it will easily fit on the Prop 2!
I also tried compiling xbasic with GCC but here the boot is on the other foot - I don't understand how to interpret the sizes it reports. The xbasic.elf file is 124k but obviously that is way too big to just be the C program. Presumably you need to subtract something as I have to do with Catalina? But then when I try loading it, it says it is loading over 180k. So presumably your loader also adds stuff (drivers, libraries etc) that are not in the xbasic.elf file? Is there an easy way to figure out the actual C program size?
At least Catalina reports the actual C program segment sizes for each compile!
Okay, I now understand some of the reason that the Catalina executable file is so big. It contains a second stage loader and some drivers as well as the program itself. You've pointed me to a section of your documentation describing the binary file format. I'll try to look at that more closely in the next day or so to determine how to make propeller-load able to handle Catalina binaries in addition to what it currently handles.
You can get the sizes of the various sections of an ELF file using the objdump utility although you have to know how to interpret the results.
This gives a list of all of the sections in the ELF file:
propeller-elf-objdump -h xbasic.elf
That is a little verbose, though, and it includes sections that don't contribute to the program size at all - the ELF file contains lots of other stuff, including a symbol table and debugging information. To see just the parts that actually get loaded, print the program headers instead:
propeller-elf-objdump -p xbasic.elf
Also, I have to admit that the sizes reported by propeller-load were nearly double the correct sizes because of a bug. I've fixed the bug and have checked in the fix. It should appear in the next release. I won't go into this any more because it is off topic for this thread but you're certainly welcome to file a bug report for any problems like this that you see.
Ok - thanks! Here is what I get for xbasic.elf (note that I used 'elf-strip' to remove all the debug stuff):
(Yikes! And some people say Catalina is complex!)
According to my GCC documentation, the .text segment contains both the code and the read-only data. For this program the read-only data is quite small (2.5k according to Catalina, but this may vary slightly between compilers). That means the code is 80k - or is there something else in the .text segment as well?
Ross.
My guess is that you didn't use any optimization when you compiled this. GCC does a remarkably bad job of generating efficient code when no optimization is done; this is partly to allow easier debugging. We always use at least -Os when compiling programs with GCC, which optimizes for size. The section header output when compiled in that mode looks like this:
By the way, I think .cogsys0 got added to this because the full duplex serial driver was compiled in rather than the simpler no-COG serial driver that is used by default.
All we have to do is use the strip program to find the size of the image.
The loader bug reported in the Alpha thread had me confused on this for a while, and I could not confirm it even in a PM.
The loader is fixed now, but not distributed yet.
After using David's propgcc/demos/xbasic make:
$ ls -l xbasic.elf
-rwxr-xr-x 1 steve sudo 85842 Nov 9 04:58 xbasic.elf
propeller-elf-strip xbasic.elf
$ ls -l xbasic.elf
-rwxr-xr-x 1 steve sudo 52016 Nov 9 05:00 xbasic.elf
It is odd that you get 124K. Maybe you removed the optimization?
$ propeller-load -t -r xbasic.elf -b eeprom
Propeller Version 1 on /dev/ttyUSB0
Writing 4552 bytes to Propeller RAM.
Verifying ... Upload OK!
Loading cache driver
1500 bytes sent
Loading program image
49804 bytes sent
Loading .xmmkernel
1628 bytes sent
[ Entering terminal mode. Type ESC or Control-C to exit. ]
xBasic 0.001
The loader reveals the code size. The approximate load time: 16 seconds.
Not bad for the Titanic
You appear to have missed the moral of the Titanic - the design flaws that made its sinking almost inevitable became obvious only in hindsight. Prior to that, everyone was sucked in by the hype.
I thought about "strip" (and even used it) but even this doesn't give you the correct result - you still have to know to run objdump and then get out the hex calculator - and even then it seems you can't figure out the real code size. I am not an elf expert - s this simply not possible because of the way the elf object format works?
I do think you will need to make it a bit easier to figure out the various segment sizes - GCC code sizes seem to be quite large, so you are going to need a way to figure out how much you need to prune the code to get your programs to fit. I understand this is an alpha release, but so far all I get is compiler crashes or load failures once the code size exceeds some maximum permissible size.
I guess I will wait till you release the fixed loader and then try again.
Hi Ross,
If you post compiler crash information somewhere it would be helpful, assuming you want to be helpful. Just saying you get them is not helpful.
I've never seen a GCC compiler crash except once back in 1998. I've seen error messages that explain problems. We are producing GCC C/C++. If Parallax wants more than that, they will tell me.
I'm waiting on some information before posting another test package - it could be a few days. Meanwhile, you could follow instructions on the loader like Martin did if you're really curious.
If I find something that has not already been posted, I will let you know. So far I have not seen anything that is not already well known. Perhaps my use of the term "crash" was too loose - I just mean when the compiler fails to compile because of some internal error. In all cases I have seen, it does indeed spit out an error message (usually something about the bss segment). I am not really trying to troubleshoot either the compiler or the loader - just understand its output.
Ok, I get it - like Rolls Royces don't break down, they just "fail to proceed". And Macs don't crash - they just "bomb". Got it now.
I don't see the relevance of that link. Can you elucidate?
It's regarding building a loader image. I'm a little tired here so I guess I assumed a bit much. Sorry about that.
Martin has a clone of the repository, so all he has to do is pull/update changes and I was able to work him through some things.
I could zip you a package here with a pre-built loader if you like. I have some things to do outside though so I might be delayed.
I thought about "strip" (and even used it) but even this doesn't give you the correct result - you still have to know to run objdump and then get out the hex calculator - and even then it seems you can't figure out the real code size. I am not an elf expert - s this simply not possible because of the way the elf object format works?
I don't understand why you can't just add up the section sizes? That will give you exactly the number of bytes that the image takes in memory. The loader will report a slightly larger number because it has to add a header onto the image before downloading it. This is assuming, of course, that you use a version of the loader that doesn't have the recently reported size reporting bug. I don't see why you would hold the existence of a bug against the entire tool chain especially when the bug was fixed minutes after it was reported. This toolchain is only a few months old at this point and is only in an alpha release state.
However, you are correct that not everyone (including me) will be able to decipher every detail of an ELF file dump. We should probably add a brief description of how the various file sections are used by the compiler and linker. Of course, that set of sections isn't fixed: a user is perfectly free to add sections of their own and direct the linker to place in them whatever they want. We should describe the standard sections though. In fact, it is very likely that there is already a description of things like .text, .data, and .bss in the many hundreds of pages of documentation that already exist as part of the GCC project.
Just add the section sizes? How will that tell you anything if you need to figure out whether you need to prune code or data to make your program fit? I guess many people don't realize just how constraining 32k actually is for LMM programs - a random C program downloaded from the internet could easily use that much space just in error messages! But sometimes I can make a program fit in Hub RAM on the Prop just by cutting down the size of some of the strings - much easier than attempting to rewrite parts of the code. I accept your point about the toolchain being a bit immature yet - but Propeller LMM C users will face this problem almost immediately (note this is just as true of Catalina as GCC - but at least Catalina makes it easy to see where your Hub RAM has gone!).
So PropGCC is currently intended mainly for power users well versed in the innards of GCC?
Hi Steve,
Thanks for the offer, but don't bother - I'm ok to wait till the fixed version of the loader is released officially. This is all really at a tangent to the original discussion, which was why David thought Catalina binaries were 200k - and I think we've resolved that. The only reason I looked at GCC as part of this was I assumed that was what he was comparing the size against (which I think was true) - and it turns out they are very similar sizes under both compilers anyway.
Ross.
I actually compared Catalina's size to a PIC24. You are the one who brought up PropGCC. The only reason I even looked at the size was to try to understand why the load was taking so long. I wanted to make my loader able to handle Catalina binaries and it occurred to me that there might not be that much difference between the speeds of propeller-load and payload if payload was loading a much larger file. I was making no value judgements about your compiler. I was just trying to figure out if I could help improve its load speed. I'll admit that this was mostly for selfish reasons because I like to do serial downloads rather than swapping SD cards when I port code to Catalina.
I didn't think a comparison of code size to a PIC24 was a meaningful comparison for LMM Propeller code sizes, so I used the only available alternative to get a more reasonable one - just to emphasize the point that Catalina's code sizes were not excessive (if anything, they are just the opposite!)
Just thinking about possible future stuff...
Does Code::Blocks support breakpoints? I don't see any way to set breakpoints in the editor...
I remember you said small and large debugging isn't ready yet. Could you just turn off caching to allow debugging and just let it run super slow?