Prop GCC -fwrapv implementation and rollover handling
pmrobert
Posts: 673
I have an application where I am comparing CNT to a previously calculated value. I'm handling wraparound by casting them to a signed int as shown here.
#ifdef NEWCODE
    if( ((int)(trl_charge2 - CNT) < 0) && (p[1] == 0))
#else
    if((CNT > trl_charge2) && (p[1] == 0))
#endif
The NEWCODE def'd line works properly; the rollover symptoms displayed by the #else line screw up the pulse every 43 seconds @ 100 MHz, as expected. I think that overflow handling is undefined in C, and in GCC in particular, so I have an odd feeling about relying on an undefined "feature", which this appears to be. However, while perusing the extensive list of GCC options I noted the existence of -fwrapv and -ftrapv. I guess I'm looking for a comment or opinion from the compiler wizards as to what would be the best option for reliable code: -fwrapv or not? As an aside, -ftrapv doesn't appear to do anything other than increase the code size considerably, while -fwrapv generates slightly smaller code than no option at all. All 3 versions appear to work well, with accurate, reliable pulse generation.
-Mike R...
Edit: Corrected spelling error in title.
Comments
In general, don't use compiler flags like -fwrapv if you can avoid it. It just hurts optimization.
It just so happens that all modern processors use 2's complement arithmetic, so the overflow behaves as you would expect. It could go wrong when compiled for some weird machine that does not use 2's complement, but no one worries about that, and I have never seen -fwrapv used.
Actually no, overflow is not implementation-defined, it's really undefined. That means the compiler/platform is free to do anything it wants in the presence of overflow. This can bite you in nasty, unexpected ways. For example, consider code that adds two positive ints and then tests whether the sum came out negative. The compiler is free to optimize that final if statement away and always print "positive sum". It already knows that a and b are positive, and so (absent overflow) a+b is positive too. Overflow behavior is undefined, so the compiler doesn't have to worry about it!
This may seem perverse, and it is, but it's the reason that -fwrapv was introduced -- it changes undefined behavior (anything can happen on overflow) into implementation-defined behavior (the overflow wraps, 2's-complement style). I think this was done at the behest of Linus Torvalds, who was *not happy* when a GCC optimization produced exactly the kind of unexpected result we saw above in the Linux kernel.
Bottom line: yes, -fwrapv is probably a good option to give if you think you might be encountering signed integer overflows.
Eric
pinlow and pinhigh are defined elsewhere. Variables not defined in this code are volatile globals which are calculated in other cogs.
-Mike R...
And I tested it. With gcc version 6.3 and -O2 that is exactly what happens: 'positive sum' always. With -O0 it'll print the 'overflow' string if you add e.g. 2147483647 and 1 (but 'positive sum' if -O2).
With Intel's compiler (an old one, version 11.1) it behaves differently with optimization - it will detect the overflow (Intel's compiler is generally more aggressive than GCC when optimizing, so that's actually a surprise). IBM's xlc compiler version 13.1 will also still print 'overflow' with -O2, so that one also behaves differently from gcc. (It could also be argued, based on this, that gcc is the only compiler I've tested that actually needs that -fwrapv flag... and yes, it works, I tested that as well.)