I do see that polymorphism and the resulting run-time dispatch can be an overhead, and that because the target cannot be determined at compile time it defeats optimizations like inlining. My thesis is that if your application needs that behaviour you are going to have to implement it somehow anyway, with the same hit on those optimizations.
For example: you get some message from a remote machine. You parse it and build "objects" out of its content, then you have to call something to process that data. That program flow cannot be determined at compile time. What to do? Handle the message "type" manually in C-style code and dispatch to the right handler? Or let C++ sort it out with its polymorphism and dynamic dispatch?
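To make the comparison concrete, here is a minimal sketch of the two dispatch styles (the message types and handler names are hypothetical, not from any real protocol). Both end up doing an indirect jump of some kind; C++ just hides it in the vtable:

```cpp
#include <cassert>

// Hypothetical message types arriving off the wire.
enum MsgType { MSG_PING, MSG_DATA };

// C-style: tag + switch. The dispatch is explicit in the source.
int handle_c_style(MsgType t) {
    switch (t) {
        case MSG_PING: return 1;
        case MSG_DATA: return 2;
    }
    return 0;
}

// C++-style: the vtable does the same job implicitly.
struct Message {
    virtual ~Message() {}
    virtual int handle() = 0;
};
struct Ping : Message { int handle() { return 1; } };
struct Data : Message { int handle() { return 2; } };

int handle_cpp_style(Message &m) { return m.handle(); }
```

Either way the branch target is only known at run time, which is the point being made above.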
Just now I'm not sure whether my C or my C++ would generate the more optimal code. I have not experimented with that. I suspect C++ would win.
Point is if you need that feature then you have to do it one way or another, it is not an overhead. If you don't need that feature why are you writing C++ that way?
Bjarne Stroustrup would be the first to tell you that not everything should be object oriented. But when you need it C++ can probably do it better.
When it comes to real-time, space-constrained systems there do seem to be a couple of C++ features that are off limits:
1) new / delete.
This implies memory allocation, which is not easy to do in a way that is predictable in time. Also it becomes easy to create an application that works fine until one day some odd set of inputs arrives that causes you to run out of memory. Oops.
2) Exceptions.
Again, in general, very non-deterministic in time. It depends on where the exception happened and who called whom.
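On the new/delete point (1), the usual embedded workaround is to reserve everything up front. Here is a minimal sketch of a fixed-capacity pool (the names and interface are illustrative, not from any real library): allocation is a bounded scan, and running out is an explicit, testable condition rather than a surprise.

```cpp
#include <cassert>
#include <cstddef>

// A fixed-capacity pool: all storage is reserved at compile time,
// so there is no heap, acquisition time is bounded by N, and
// exhaustion is an explicit failure the caller must handle.
template<typename T, size_t N>
class Pool {
    T slots[N];
    bool used[N];
public:
    Pool() { for (size_t i = 0; i < N; ++i) used[i] = false; }
    T* acquire() {
        for (size_t i = 0; i < N; ++i)
            if (!used[i]) { used[i] = true; return &slots[i]; }
        return 0;  // out of slots: deterministic, visible failure
    }
    void release(T* p) { used[p - slots] = false; }
};
```

The odd-inputs-run-you-out-of-memory failure mode still exists, but it shows up as a null return you can test for rather than heap corruption or an allocator fault.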
I hate exceptions anyway. They seem like a really bad way of doing GOTO. Somewhere along the line high-level language designers decided that GOTO was bad, thanks Dijkstra, and then they spent the next 30 years figuring out how to reintroduce it without anyone noticing. Exceptions are one result of this.
I wonder if exceptions eat up code space for a similar reason to what was causing the problem with pure virtual functions. There may be lots of code involved with reporting issues with exceptions that were not handled by the user code. If you could stub out that last-chance handler you might be able to use exceptions in PropGCC.
...without exceptions you have to do setjump stuff.
Yes you do. setjmp/longjmp are just a big goto. As I said, years ago GOTO became taboo and high-level language designers have been inventing ways to reintroduce it in a camouflaged way ever since. Exceptions are one result of that.
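For the record, this is what that "big goto" looks like; a minimal sketch using the standard setjmp/longjmp, nothing project-specific. The longjmp bails straight back to the setjmp point, skipping every intermediate return (and, in C++, skipping destructors, which is exactly the gap exceptions were invented to fill):

```cpp
#include <setjmp.h>
#include <cassert>

static jmp_buf recover;

// Deep in the call stack: on failure, jump straight back out.
static void deep_work(int fail) {
    if (fail)
        longjmp(recover, 42);   // non-local jump back to the setjmp point
}

static int run(int fail) {
    int code = setjmp(recover); // returns 0 directly, 42 via longjmp
    if (code)
        return code;            // we arrived here via longjmp
    deep_work(fail);
    return 0;                   // normal path
}
```

Used carefully it works; used carelessly you get exactly the spaghetti the thread is complaining about.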
As long as you write the code carefully it works fine.
Yes, exactly. The emphasis here is on "carefully". The same applies to the good old assembler JMP that we started with, or the HLL GOTO. Without care you get the same error-prone spaghetti as using goto.
Using exceptions can make code that has to be able to handle errors at many levels much easier. Like Spin's abort command.
True. In Spin you are not normally in the situation of leaking resources like memory or file handles etc.
Now, I'm not going to say "exceptions should be off the table for space constrained real-time embedded systems". The question is does C++ allow one to use "exceptions" in a way that is not eating memory or time over what you might code manually by other means? If you need the feature and C++ can do it best then there is no C++ overhead in using exceptions.
In that context C++ exceptions are actually off the table for projects like the Joint Strike Fighter software because they introduce an unknown into execution time. As do "new" and "delete".
Bottom line here is: What features of C++ can we use in Propeller projects and still have it fit in 32KB?
Using exceptions can make code that has to be able to handle errors at many levels much easier. Like Spin's abort command.
Abort made the FATEngine possible; without it there would have been so much work to handle errors in every function.
I somehow agree and disagree with you there at the same time.
If you would return negative (negated?) string addresses as error strings on aborts then things in Spin would be easier...
When I do myvar := \fat.xxx I do not know if myvar is now an address of the error string or the correct return value of the called function. How to solve this?
The question is does C++ allow one to use "exceptions" in a way that is not eating memory or time over what you might code manually by other means? If you need the feature and C++ can do it best then there is no C++ overhead in using exceptions.
Doesn't C++ require destructors to be called as the exception stack is unwound? And the list of destructors has to be determined at run time?
I tried to get exceptions working in PropGCC, but couldn't get anything useful. I don't remember the specific issue that finally caused me to drop it. Exceptions would have made working with FSRW so much easier.
Yes. That is why exceptions are not allowed in the coding guidelines for the Joint Strike Fighter. Exceptions introduce non-deterministic run time.
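A small sketch of the unwinding being discussed (contrived classes, purely for illustration): which destructors run, and how many, depends on where the throw happens and who called whom, which is precisely the run-time dependence that makes the cost hard to bound.

```cpp
#include <cassert>
#include <stdexcept>

// Each Tracer counts its own destruction. Throwing from deep()
// unwinds the stack, running the destructors of every live local
// on the path between the throw and the catch.
static int destroyed = 0;

struct Tracer { ~Tracer() { ++destroyed; } };

static void deep() {
    Tracer t;
    throw std::runtime_error("boom");
}

static int middle() {
    Tracer t;
    deep();
    return -1;  // never reached
}

static int run() {
    destroyed = 0;
    try { return middle(); }
    catch (const std::runtime_error&) { return destroyed; }
}
```

Two Tracers were live on the path, so two destructors ran; a deeper or shallower call chain would give a different count, and the compiler cannot know which at compile time.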
I have always said, only half-jokingly, that the C++ language has grown so big and complex that no single human could possibly understand all of it. Recently I thought I should catch up with the new C++11 standard so I started watching the "Going Native" videos on YouTube. I had to chuckle when seeing that even Bjarne Stroustrup himself was surprised at a couple of things C++ did or did not do.
If you want to be frightened about how complex C++ is, and how your programs can go badly wrong in hard-to-understand ways as a result, watch the video "Clang: Defending C++ from Murphy's Million Monkeys" by Google's Chandler Carruth: http://www.youtube.com/watch?v=NURiiQatBXA
It's an hour long but very eye-opening, informative and entertaining.
Just FYI, since I didn't see anyone say why this __cxa_pure_virtual() function exists or why it gets compiled in when using pure virtuals, it's called when you try to call virtual functions which are pure in the base class during construction or destruction (and it's reporting the error). You may want to make it do something other than be an infinite loop.
Calling a virtual function from inside of a constructor or destructor is very bad, because a more derived version of the class might not yet be constructed or might already be destructed. You'll end up calling the wrong virtual function, the one in the current class derivation or in a base derivation. In the case of the current or base class version of the function being pure virtual, you end up calling __cxa_pure_virtual().
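A minimal sketch of the constructor case just described (illustrative class names): the virtual call made while Base is still constructing resolves to the Base version, because the Derived part does not exist yet. Had whoami() been pure virtual in Base, that same call would land in __cxa_pure_virtual instead.

```cpp
#include <cassert>

struct Base {
    int seen;
    Base() { seen = whoami(); }  // virtual call during construction:
                                 // resolves to Base::whoami, not Derived's
    virtual ~Base() {}
    virtual int whoami() { return 1; }   // Base answers 1
};

struct Derived : Base {
    virtual int whoami() { return 2; }   // Derived answers 2
};
```

After construction is complete, normal virtual dispatch applies again, so the same object answers differently depending on when you ask.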
Also, on the whole "C++ is bigger and slower than C" front: it's been my experience that C++ produces smaller and faster code than C with much less effort. Of course, in simple contrived cases you can gather evidence of the opposite, but with real-world code that does something significant you'll have more trouble. The size issues are largely down to standard libraries and extra features (RTTI and exception handling). If you were to implement the same standard library features and extra C++ features in C, then the size would be similar if not worse in C. Obviously, you can turn off RTTI and exception handling. You can also avoid large swaths of the standard library (like the STL) to save on code size when you don't need it. As for performance, the only argument people can really make is that C++ has all the virtual table stuff and that's added code because of the indirection. However, if you look at most sufficiently complex C code it has a bunch of jump tables and indirection also, because it kind of makes sense to do things that way in a lot of cases.
So really it's just down to using the language properly. Yes, you can more easily write bad, bloated, slow code that is hard to work with, but I would argue that you can also more easily write good, fast, reusable code when you know it better.
I find that my C++ code accomplishes a lot more with fewer lines of code, is easier to work with when I want to add more later on, and is also easier to reuse (I have classes and templates that I have used in hundreds of projects without needing to change a single thing in them). Of course, I've been using C++ since 1987. I used Lattice C++ on the Amiga 500. It was actually just a preprocessor (cfront) that converted the C++ code into C and then compiled it with their C compiler.
Thanks for that explanation of __cxa_pure_virtual(). Are there any other such error catching functions C++ might sneak in that we should know about?
sounds like getting __cxa_pure_virtual() to light up a big red LED would be fun. Or have it spit something out on stdout.
Still I have the problem that taking care of __cxa_pure_virtual() in my Raspberry Pi or PC builds does not reduce size. Still hundreds of KB. Building with -nostdlib does get me a 2K ELF file but it won't run as it has no main() any more!
What other "secret" functions can I short circuit on those platforms?
I think that a lot of the perception of C++ as slow and bloated comes exactly from this phenomenon of the half-megabyte "Hello World" programs. When you can make a smaller "Hello World" in JavaScript, including the JS interpreter required to run it, then something is very wrong.
As for "using the language properly", I think C++ has suffered from the over-popularity of and emphasis on object-oriented programming in the past couple of decades. This has caused many programmers to use classes and inheritance etc. everywhere, inappropriately, because "that is the correct thing to do". Even Bjarne says, quite often, that object orientation has gone too far.
Of course it is impossible to use C++ properly because it is so big and complicated you can never tell if you are using it properly or not:)
The way you're supposed to use the FATEngine is to run your code from an exception handler... I think the demo code I ship with it demonstrates that. Only one place in your code should trap aborts:
PUB abortHandler
  result := \actualCode
  if(fat.partitionError)
    ' Handle error

PUB actualCode
  ' Do stuff without needing abort traps... because if anything goes wrong this function is aborted...
Now, I agree that the library could be written better. Please note that I wrote it while I was a sophomore in college. Much time has passed.
On the subject of C++ vs C: I've been programming OO software in mostly C (not C++) for over 10 years; when I started on the project, the Powers That Be didn't like how easy it is to introduce bugs in C++ and how easy it is to make programs totally unreadable, or make them appear to do things while they're really doing other things. So basically I just shrugged and went along with the rule: No C++.
Being forced to write in C really makes you think (and learn) about how far you can get writing Object-Oriented Software without having to switch to C++:
- Encapsulation can be accomplished by declaring but not defining struct types and using pointers to them in public functions ("opaque structs").
- Polymorphism can be accomplished by using callback functions (and I always recommend using two-stage function pointer typedefs e.g. typedef int myfunctype(int); typedef myfunctype *myfuncptr; instead of typedef int (*myfuncptr)(int); )
- Inheritance is a little more difficult because it's basically declaring struct Base inside struct Derived and with opaque structs, you can't really do it as efficiently as in C++ but it can be done. Fortunately this is not a common requirement.
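A small sketch of the first two bullets together: the opaque-struct pattern plus a callback, using the two-stage typedef recommended above. All names here are illustrative, and in real code the struct definition would live in the .c file so callers see only the forward declaration:

```cpp
#include <cassert>
#include <cstdlib>

// Stage 1: name the function type. Stage 2: name the pointer type.
typedef int counter_step_func(int);
typedef counter_step_func *counter_step_ptr;

// Opaque to callers: they only ever see "struct counter *".
struct counter;

// The "private" definition (would normally be hidden in the .c file).
struct counter { int value; counter_step_ptr step; };

// One possible behaviour, plugged in as a callback ("polymorphism").
static int step_by_two(int v) { return v + 2; }

static struct counter *counter_new(counter_step_ptr step) {
    struct counter *c = (struct counter *)malloc(sizeof *c);
    c->value = 0;
    c->step = step;
    return c;
}

static int counter_tick(struct counter *c) {
    return c->value = c->step(c->value);
}
```

The two-stage typedef also lets you declare implementations as `counter_step_func step_by_two;` in a header and have the compiler check the signature.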
In contrast, I recently had to implement a library in C++ and spent several days finding out why my virtual function void somefunction(const sometype *) wasn't being called; it turns out I should have declared it void somefunction(sometype *), without the const modifier on the parameter. With the const, it wasn't picked up by the compiler as an override for the base class's virtual somefunction, so the code just ended up calling the base class function instead of my subclass function. Aaargh!
I really like most of C++ and I like the new features in C++11, but I have the feeling that C++ is more prone to introduction of bugs that are hard to solve (or even detect!) than C, so for large projects, the time that you gain by writing the code in C++ instead of C is lost because it takes longer to debug.
And exceptions... well... on good days they make it difficult to follow all the possible execution paths, on bad days they should be taken behind the shed and put out of our misery. I think they are the worst invention since "goto".
Just FYI, since I didn't see anyone say why this __cxa_pure_virtual() function exists or why it gets compiled in when using pure virtuals, it's called when you try to call virtual functions which are pure in the base class during construction or destruction (and it's reporting the error). You may want to make it do something other than be an infinite loop.
Given the microcontroller environment, what would be a good replacement that won't force the use of libraries we may not want to include? I understand the explanation of the use of that function, but there isn't necessarily going to be a console, and I wouldn't think anybody wants it grabbing an I/O pin by default. Can you abort() with little overhead?
Heater,
There are a whole host of __cxa_* functions that get included for exception handling, you can google for them. There's also the normal automatic default constructor, copy constructors, move constructors, copy assignment operator, move assignment operator, and destructors (when you don't declare your own). I'm sure there are probably plenty of compiler specific and runtime specific things too. atexit code to destruct global scope classes, etc. Most of them are not something you want to override or change because they would break things.
jac_goudsmit,
In C++11 they introduced the override keyword to help with the very problem you described. When you use it, the compiler verifies that your function signature matches a virtual function signature in the base class.
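A sketch of exactly that const-mismatch bug (illustrative class names). Without override, the mismatched signature silently declares a brand-new overload and base-pointer calls keep hitting the base function; writing `override` on the mismatched version would turn the silent bug into a compile error.

```cpp
#include <cassert>

struct Payload {};

struct Handler {
    virtual ~Handler() {}
    virtual int process(Payload *p) { return 0; }  // base: non-const param
};

// Without override: the const param makes this a NEW overload,
// not an override, so virtual dispatch never reaches it.
struct Buggy : Handler {
    virtual int process(const Payload *p) { return 1; }
};

// With override and the matching signature, dispatch works as intended.
// (Declaring "int process(const Payload *p) override" would not compile,
// which is the whole point of the keyword.)
struct Fixed : Handler {
    int process(Payload *p) override { return 1; }
};
```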
Anyway, regarding using C++ properly, yeah, it's easy not to if you try using all the whiz-bang features. I consider proper use of C++ to be not using all of it (some would say most of it). Things I do to stay sane: never use STL or Boost (ugh); very rarely use operator overloading (primarily only for math classes and container classes, e.g. [] for indexed element access); never change the meaning of an operator (like cout/cin do with << and >>); don't use RTTI; minimal use of exceptions (at work we use Google's Breakpad stuff around the whole program); no multiple inheritance except for pure virtual interface classes (especially no diamond inheritance); very little of the new C++11 stuff (although override is awesome); USE const appropriately (not everywhere, but certainly in a lot of places); and I use templates but I don't go overboard with them.
Also, I agree about OOP being overdone in C++. I do use classes a fair bit, but I also do a fair bit of procedural coding. You can see that in openspin; although I am in the process of refactoring things there, it's not going to change super significantly in OOP vs procedural content.
Roy,
I agree that normally when programming on the desktop or RPi I would not want to stub out any of those hidden run time functions. It just bugs me not knowing what they are and how to deal with them if I want to. This thread shows what can be done with propgcc I should be able to do the same on other platforms.
The problem with wanting to use C++ properly and trying to carefully draw a line around what features you use and how is that it is impossible.
If you want to write a Qt application you will end up doing it the Qt way. If you join a project that uses std:: or boost you will end up writing that way. They look and feel very different. That's before we get onto "modern" projects in C++11.
I'm looking to make an "interface" type class using C++, and came across this SO question. Based on the answers there and here I've come up with the following program (using the libpropeller serial class for I/O, but that doesn't matter too much here):
#include "libpropeller/serial/serial.h"

extern "C" void __cxa_pure_virtual() { while (1); }

class Interface
{
public:
    virtual ~Interface() {}
    virtual void InterfaceMethod(Serial * temp) = 0;
};

class Child : public Interface
{
public:
    virtual void InterfaceMethod(Serial * temp) {
        temp->Put("Override class\r\n");
    }
};

int main() {
    Serial debug;
    debug.Start(31, 30, 115200);
    debug.Put("Hello World\r\n");

    Child c;
    c.InterfaceMethod(&debug);

    debug.Put((char)0xFF);
    debug.Put((char)0x00);
    debug.Put((char)0);
    return 0;
}
As written, the download size is 3516 bytes. If I comment out the "virtual ~Interface(){}" line I get 3012 bytes. I get 3012 bytes if I continue by commenting out the "extern" line. If I remove the classes entirely (by commenting out the Child c line and following) I get 2960 bytes.
So, it looks like we can have a class with virtual functions for only 3012-2960=52 bytes. I'm guessing that the virtual destructor brings in the vtable, which is where the next 504 bytes come from.
From the SO question it looks like as long as I never use "delete" on the interface class it is safe to remove the virtual destructor. Is this correct?
Moving forward I think the best option for an interface is to not define the __cxa_pure_virtual function and to not define the Interface destructor. This has two benefits:
1. No vtable (space savings and no performance loss)
2. Compile warning (via program too big warning) if somebody tries to define a non-pure virtual function in the interface method.
One downside to the interface method seems to be that it pulls in the definitions of all the interface functions, even if they are not used. I wrote the following interface for a serial port:
class StreamInterface {
public:
    virtual void Put(const char character) = 0;
    virtual int Put(const char * buffer_ptr, const int count) = 0;
    virtual int Put(const char * buffer_ptr) = 0;
    virtual int PutFormatted(const char * format, ...) = 0;
    virtual int Get(const int timeout = -1) = 0;
    virtual int Get(char * const buffer, const int length, const int timeout = -1) = 0;
    virtual int Get(char * const buffer, const char terminator = '\n') = 0;
};
If I then compile with this it pulls in 1500 bytes. In this case, 1000 of those bytes come from the PutFormatted method. If I use that function in main then the code size only goes up by a few tens of bytes. If I then remove the virtual declaration in the interface then the code size stays about the same.
All this is without using the StreamInterface class at all. My theory is that the compiler must always have all definitions of the pure virtual functions available, since if StreamInterface is passed as a pointer it doesn't know if the class implementing it is Class A or Class B or something else.
This puts a crimp on the usefulness of the interface pattern, since it comes at the cost of code size. Of course there is no real cost when you use it to the full extent, but it means that you can't define "general purpose" interfaces and have it be size efficient (since each child class's implementation will always be pulled in).
I've never had interface classes with anything but virtual method declarations. No constructor or destructor ever declared/defined in them. Not sure why you would ever want that, the interface class has no data and no actual method code. At least that's my usage/opinion/whatever.
It can be very important to have a virtual destructor in your base class.
If you have an interface class and a derived implementation class then you can pass around pointers of the interface (base) type.
If you delete such an object via a pointer to its base type, the implementation's destructor will not be called and you may get memory and resource leaks. Try out the following code, with and without the virtual destructor in the base class.
#include <iostream>

using namespace std;

class Interface
{
public:
    virtual ~Interface() {}  // Required to ensure implementations'
                             // destructors are called
    virtual void method() = 0;
};

class Impl : public Interface
{
public:
    virtual ~Impl() {
        cout << "~Impl:" << endl;
    }
    virtual void method() {
        cout << "method:" << endl;
    }
};

int main() {
    Impl* impl = new Impl();
    Interface* iface = impl;
    iface->method();
    delete iface;  // Impl's destructor is not called here if the base
                   // class has the default destructor!
    return 0;
}
You had better revisit all your code and check if you need any virtual destructors in base classes.
Or you can just recompile everything with clang and it will give you a polite warning about that (and many other things):
$ clang -Wall -o interface interface.cpp -lstdc++
interface.cpp:34:5: warning: delete called on 'Interface' that is abstract but has non-virtual destructor
Edit: Actually GCC gives that same warning with -Wall.
Yeah Heater, I understand the implications of having and not having virtual destructors. However, I generally don't delete via interfaces, and if I do, then I do it via some indirect method which gets to the actual class via some mechanism (for example a ref counting base class).
In my book, interfaces are interfaces, not classes. In general, you use them to talk to an object, you don't use them to create or destroy the object. Often you get an interface to an object from some create object function, and you either Release() it or you call an appropriate destroy function.
On occasion I'll have a base class that is interface-like, but will have some functionality built into it. I don't think of these as interfaces, but just abstract base classes. In that case, obviously, the destructor will be virtual.
The SO question that I linked to has lots of information about why a destructor is needed. I omitted it from the StreamInterface code that I posted based on the restriction that you never call "delete baseclass".
The only alternative for an interface to pure virtual base classes that I have found is the "Curiously Recurring Template Pattern", also called "Simulated Dynamic Binding":
StreamInterface.h:
template <typename Deriving>
class StreamInterface {
public:
    int Put(const char * buffer_ptr) {
        return static_cast<Deriving*>(this)->Put(buffer_ptr);
    }
    int Put(const char c) {
        return static_cast<Deriving*>(this)->Put(c);
    }
};
Serial.h:
class Serial : public StreamInterface<Serial>{
main.cpp:
Serial debug;
debug.Start(31,30, 115200);
debug.Put("Hello World\r\n");
//debug.PutFormatted("Hello, %s", "World");
StreamInterface<Serial> * si = &debug;
si->Put("Hello, StreamInterface");
Unfortunately, this is only good when you know at compile time what "type" you want. Overall, it doesn't seem that useful to me. It seems that it's best for enforcing naming conventions (i.e. all output must be done with the Put(char c) method, and not an out(const char c) method or whatever) and not for dynamically plugging in different outputs.
It is very efficient, however. Download size for the above example is 2996 bytes (as opposed to 2928 bytes without the StreamInterface).
You are right, one does not need a virtual destructor if one does not delete via a base class. The trouble is that it's this kind of sneaky C++ thing that is waiting there to catch the billion programmers like me who haven't worked out the rule yet, or who just forget and make a mistake.
Luckily, with -Wall, -Werror and the new C++11 features, the compiler has become a lot better at stopping us from making such mistakes. For example, replacing the pointers in my example with shared_ptr forces one to get rid of the delete, and object lifetime is taken care of nicely; the virtual destructor is not even required any more as far as I can tell.
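That last claim about shared_ptr is checkable: shared_ptr captures the correct deleter at construction time, so the derived destructor runs even when the base destructor is not virtual. A small sketch with contrived class names (note that a plain `delete` through the base pointer here would still be undefined behaviour; only shared_ptr saves you):

```cpp
#include <cassert>
#include <memory>

static int impl_destroyed = 0;

struct Iface {
    ~Iface() {}                 // deliberately NOT virtual
    virtual int method() = 0;
};

struct Impl : Iface {
    ~Impl() { ++impl_destroyed; }
    int method() { return 7; }
};

static int demo() {
    impl_destroyed = 0;
    {
        // make_shared<Impl> stores a deleter for Impl, so the
        // control block destroys the Impl even via an Iface pointer.
        std::shared_ptr<Iface> p = std::make_shared<Impl>();
        p->method();
    }                            // Impl::~Impl runs here anyway
    return impl_destroyed;
}
```

unique_ptr<Iface> does not give you this: it deletes through the base type, so there the virtual destructor is still required.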
Clang makes C++ even more user friendly.
I do agree that interfaces are not / should not be classes. In an abstract way I can have ThingA and ThingB that just happen to have the same knobs and dials but are in no other way related. In C++ we end up having to forcibly relate them by inheriting from a base class.
I'm curious as to why you have C++11 features in your "do not use" list. Which parts are dangerous?
I have only been looking at C++11 features recently and it seems to have made great strides in making C++ programming easier and less error prone. I'm thinking of "auto", "override", range-based for loops, shared_ptr, lambdas, move semantics... After a few days using that stuff I could not go back to the old C++ style; it's so much nicer.
Also, why the "never use STL"? Isn't it better to use standardized containers and algorithms rather than inventing your own? What's the catch? Perhaps I have not used it enough to find out yet.
Well, except of course sheer size becomes the problem for embedded work as we are discussing here.
Roy,
Also, why the "never use STL"? Isn't it better to use standardized containers and algorithms rather than inventing your own? What's the catch? Perhaps I have not used it enough to find out yet.
Well, except of course sheer size becomes the problem for embedded work as we are discussing here.
I think that's the long and short of it. Try to create a vector in one of your projects and see what happens. Also STL containers and heap storage go hand-in-hand. Since heap is to be avoided in these tiny run-time memory environments that would also seem to be a strike against STL containers.
Yeah Heater, I understand the implications of having and not having virtual destructors. However, I generally don't delete via interfaces, and if I do, then I do it via some indirect method which gets to the actual class via some mechanism (for example a ref counting base class).
In my book, interfaces are interfaces, not classes. In general, you use them to talk to an object, you don't use them to create or destroy the object. Often you get an interface to an object from some create object function, and you either Release() it or you call an appropriate destroy function.
On occasion I'll have a base class that is interface-like, but will have some functionality built into it. I don't think of these as interfaces, but just abstract base classes. In that case, obviously, the destructor will be virtual.
That's interesting. You have given me something to consider there. I make interfaces that are intended to be used the same way: a concrete object is created and the interface is used only to turn the knobs. But I think some of my classes may hold only the pointer to the interface base class yet are still expected to mop up the concrete object when they are destroyed, as they contain it. I'll need to ponder on this a bit.
Heater,
The primary reason I don't utilize C++11 stuff yet is that it's not well supported across compilers/platforms yet. Being in the games industry means "platforms" includes consoles. This is actually one of the main reasons I don't use STL, too. Believe it or don't, but STL is not fully compatible across different compilers (even GNU to VC++, let alone to consoles). The other big reason for no STL is that I think the design/interface is terrible, and they make using your own allocators extremely painful (while having extremely BAD allocation patterns by default). In general, STL perf is terrible unless you wrangle the allocators. Not to mention that its uniform interface makes it easy for people to abuse in horrible perf-killing ways (like linearly iterating containers that are slow at it, or randomly accessing ones that are slow at that). Then there's the bloat and how it makes debugging painful... and I won't even go into the horror that is Boost.
Also, regarding interfaces, my general usage pattern for them is as APIs to things that don't get created and destroyed much. Usually once at startup and shutdown. For objects that have transient lifetimes, I'm often using the refcounted base and/or abstract base classes that are not pure interfaces. The other way I use interfaces is to "add on" to a class and/or to allow decoupling. So I have an object, and I want some outside function or library to be able to interact with it without needing to know about the whole object derivation. So the interface is not a base in the sense that it's valid to think of the pointer as a reference to the object. For example, I have a Serializable interface that has only two pure virtual methods (Serialize and Unserialize), and I'll have every object I want to be serializable include that interface. Then I can pass those objects to a function that can work with Serializable objects. That function doesn't create or destroy anything; it just uses the Serializable interface. In any case, if I have an "interface" that I want to actually be able to be a base deletable object pointer then it stops being a pure interface and becomes an abstract base class with an appropriate virtual destructor and so on.
...
class Serial : public InputStream<Serial>, public OutputStream<Serial>{
...
The usefulness of CRTP shows when you templatize the functions that use the pattern. Then, as long as you know at compile time, in just one location, which version of the base class you want to use, the rest of it just falls into place.
It doesn't look that nice, but it's efficient and relatively clean for the end user. There's little or no overhead from what I can tell.
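A self-contained sketch of that point (illustrative names, simplified from the StreamInterface above): the consumer function is itself a template, so the "interface" is resolved entirely at compile time, with no vtable and no indirection.

```cpp
#include <cassert>
#include <string>

// CRTP base: forwards statically to whatever Deriving provides.
template<typename Deriving>
struct StreamInterface {
    int Put(const char *s) {
        return static_cast<Deriving*>(this)->Put(s);
    }
};

// A concrete "stream" that just collects output in a string.
struct StringStream : StreamInterface<StringStream> {
    std::string out;
    int Put(const char *s) { out += s; return (int)out.size(); }
};

// The consumer only knows about the interface template; the concrete
// type is plugged in at compile time at the call site.
template<typename T>
int Greet(StreamInterface<T> &s) {
    return s.Put("Hello, CRTP");
}
```

The trade-off, as noted above, is that the concrete type must be known at each call site (or threaded through as a template parameter), so you can't swap implementations at run time the way a virtual interface allows.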
OK, noted the CRTP. Another piece of C++ weirdness I can add to my collection:)
The Chrome thing is driving me nuts, it's almost impossible to post to this forum with Chrome on this Debian box. Even hitting the "post reply" button hangs forever. Oddly on another Debian + Chrome machine in my office it works fine. There must be some weird config difference somewhere.
SRLM,
That Mobius Template thing (CRTP) was about creating interfaces without virtual methods. Here is another way to do it; it does not even use inheritance.
#include <functional>
#include <iostream>
#include <string>

using namespace std;

// An interface definition with no inheritance, using C++11 lambdas.
struct Interface
{
    template<class T>
    explicit Interface(T& other)
    // Capture by reference so the calls go to the original object
    // (capturing by value would operate on a private copy).
    : foo([&other](int param) { return other.foo(param); }),
      bar([&other](int param) { return other.bar(param); })
    {}

    const function<int(int)> foo;
    const function<string(int)> bar;
    // ...
};

// A class that implements the interface. Note: no base class.
class Impl
{
public:
    int foo(int param)
    {
        cout << "Impl::foo" << endl;
        return (param + 1);
    }
    string bar(int param)
    {
        cout << "Impl::bar" << endl;
        return ("ret value");
    }
};

// And use it like so:
int main()
{
    Impl impl;
    Interface iface(impl);
    cout << iface.foo(1) << endl;
    cout << iface.bar(2) << endl;
    return 0;
}
Seems this is faster than using inheritance but eats more memory.
As an exercise this can be done without C++ lambdas, use functor classes instead.
Comments
I do see that polymorphism and the resulting run time dispatch can be an overhead. And that this cannot be determined at compile time hence upsetting optimizations like inlining etc. My thesis is that if your application needs that you are going to have to do it some how anyway, with the same hit on those optimizations.
For example. You get some message from a remote machine. You parse it and build "objects" out of it's content, you then have to call something to process that data. That program flow cannot be determined at compile time. What to do? Handle the message "type" manually in C style code and dispatch to the right handler. Or let C++ sort it out with it's polymorphism and dynamic dispatch?
Just now I'm not sure if my C code or C++ code generation will be more optimal. I have not experimented with that. I suspect C++ would win.
Point is if you need that feature then you have to do it one way or another, it is not an overhead. If you don't need that feature why are you writing C++ that way?
Bjarne Stroustrup would be the first to tell you that not everything should be object oriented. But when you need it C++ can probably do it better.
When it comes to real-time space constrained systems there do seem to be a couple of C++ features that are off limits:
1) new / delete.
This implies memory allocation which is not easy to do in a way that is predictable in time. Also it becomes easy to create an application that works fine until one day some odd set or inputs arrive that cause you to run out of memory. Oops.
2) Exceptions.
Again, in general, very non-deterministic in time. Depends where the exception happened. Who called who.
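On point 1, the usual embedded workaround is to carve fixed-size blocks out of a static buffer so that allocation is constant-time and exhaustion is an explicit, testable condition rather than a surprise in the field. A minimal sketch (not from this thread, and simplified; a real pool would worry more about alignment and thread safety):

```cpp
#include <cassert>
#include <cstddef>

// Minimal fixed-block pool: all storage is a fixed-size member array,
// allocate/release are O(1), and running out of blocks is an explicit
// condition the caller can check. BlockSize must be >= sizeof(void *).
template <std::size_t BlockSize, std::size_t BlockCount>
class FixedPool
{
public:
    FixedPool() : freeList(0)
    {
        // Thread every block onto the free list.
        for (std::size_t i = 0; i < BlockCount; ++i)
        {
            char *block = storage + i * BlockSize;
            *reinterpret_cast<void **>(block) = freeList;
            freeList = block;
        }
    }
    void *allocate()
    {
        if (freeList == 0) return 0;    // pool exhausted: no hidden growth
        void *block = freeList;
        freeList = *reinterpret_cast<void **>(block);
        return block;
    }
    void release(void *block)
    {
        *reinterpret_cast<void **>(block) = freeList;
        freeList = block;
    }
private:
    union
    {
        char storage[BlockSize * BlockCount];
        void *aligner;   // forces pointer alignment on the array
    };
    void *freeList;
};
```

Since the pool never touches the heap, both the time and the memory cost are known at build time.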
I hate exceptions anyway. They seem like a really bad way of doing GOTO. Somewhere along the line high-level language designers decided that GOTO was bad, thanks Dijkstra, and they then spent the next 30 years figuring out how to reintroduce it without anyone noticing. Exceptions are one result of this.
Using exceptions can make code that has to handle errors at many levels much easier, like Spin's abort command.
Abort made the FATEngine possible, without it there would have been so much work to handle errors in every function.
Now, I'm not going to say "exceptions should be off the table for space constrained real-time embedded systems". The question is does C++ allow one to use "exceptions" in a way that is not eating memory or time over what you might code manually by other means? If you need the feature and C++ can do it best then there is no C++ overhead in using exceptions.
In that context C++ exceptions are actually off the table for projects like the Joint Strike Fighter software, because they introduce an unknown into execution time. As do "new" and "delete".
Bottom line here is: What features of C++ can we use in Propeller projects and still have it fit in 32KB?
I somehow agree and disagree with you there at the same time.
If you would return negative (negated?) string addresses as error strings on aborts, then things in Spin would be easier...
When I do a myvar := \fat.xxx I do not know if myvar is now an address of the error string or the correct return value of the called function. How to solve this?
Enjoy!
Mike
Doesn't C++ require destructors to be called as the exception stack is unwound? And the list of destructors has to be determined at run time?
I tried to get exceptions working in PropGCC, but couldn't get anything useful. I don't remember the specific issue that finally caused me to drop it. Exceptions would have made working with FSRW so much easier.
Yes. That is why exceptions are not allowed in the coding guidelines for the Joint Strike Fighter. Exceptions introduce non-deterministic run time.
I have always said, only half jokingly, that the C++ language has grown so big and complex that no single human could possibly understand all of it. Recently I thought I should catch up with the new C++11 standard, so I started watching the "Going Native" videos on YouTube. I had to chuckle when seeing that even Bjarne Stroustrup himself was surprised at a couple of things C++ did or did not do.
If you want to be frightened about how complex C++ is, and how your programs can go badly wrong in hard-to-understand ways as a result, watch the video "Clang: Defending C++ from Murphy's Million Monkeys" by Google's Chandler Carruth:
http://www.youtube.com/watch?v=NURiiQatBXA
It's an hour long but very eye-opening, informative and entertaining.
Calling a virtual function from inside of a constructor or destructor is very bad, because a more derived version of the class might not yet be constructed or might already be destructed. You'll end up calling the wrong virtual function, the one in the current class derivation or in a base derivation. In the case of the current or base class version of the function being pure virtual, you end up calling __cxa_pure_virtual().
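A small sketch makes the behavior concrete (hypothetical names). During the base constructor the virtual call binds to the base version; if that version were pure virtual, the call would land in __cxa_pure_virtual() instead:

```cpp
#include <cassert>

// During Base's constructor the object is still a Base as far as the
// vtable is concerned, so the virtual call binds to Base::id(), not
// Derived::id().
class Base
{
public:
    int seenInCtor;
    Base() { seenInCtor = id(); }   // hope: Derived::id(); reality: Base::id()
    virtual int id() { return 1; }
    virtual ~Base() {}
};

class Derived : public Base
{
public:
    virtual int id() { return 2; }
};
```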
Also, on the whole C++ being "bigger and slower than C": it's been my experience that C++ produces smaller and faster code than C, with much less effort. Of course, in simple contrived cases you can gather evidence of the opposite, but in real-world code that does something significant you'll have more trouble. The size issues are largely down to standard libraries and extra features (RTTI and exception handling). If you were to implement the same standard library features and extra C++ features in C, then the size would be similar if not worse. Obviously, you can turn off RTTI and exception handling. You can also avoid large swaths of the standard libraries (like STL) to save on code size when you don't need them. As for performance, the only argument people can really make is that C++ has all the virtual table stuff, and that's added code because of the indirection. However, if you look at most sufficiently complex C code, it has a bunch of jump tables and indirection also, because it kind of makes sense to do things that way in a lot of cases.
So really it's just down to using the language properly. Yes, you can more easily write bad, bloated, slow code that is hard to work with, but I would argue that you can also more easily write good, fast, reusable code when you know it better.
I find that my C++ code accomplishes a lot more with fewer lines of code, is easier to work with when I want to add more later on, and is also easier to reuse (I have classes and templates I have written that I have used in hundreds of projects without needing to change a single thing in them). Of course, I've been using C++ since 1987. I used Lattice C++ on the Amiga 500. It was actually just a preprocessor (cfront) that converted the C++ code into C and then compiled it with their C compiler.
Thanks for that explanation of __cxa_pure_virtual(). Are there any other such error catching functions C++ might sneak in that we should know about?
Sounds like getting __cxa_pure_virtual() to light up a big red LED would be fun. Or have it spit something out on stdout.
Still I have the problem that taking care of __cxa_pure_virtual() in my Raspberry Pi or PC builds does not reduce size. Still hundreds of KB. Building with -nostdlib does get me a 2K elf file, but it won't run as it has no main() anymore!
What other "secret" functions can I short circuit on those platforms?
I think that a lot of the perception of C++ as slow and bloated comes exactly from this phenomenon of the half-megabyte "Hello World" programs. When you can make a smaller "Hello World" in JavaScript, including the JS interpreter required to run it, then something is very wrong.
As for "using the language properly", I think C++ has suffered from the over-popularity of and emphasis on object-oriented programming in the past couple of decades. This has caused many programmers to use classes and inheritance etc. everywhere, inappropriately, because "that is the correct thing to do". Even Bjarne says quite often that object orientation has gone too far.
Of course it is impossible to use C++ properly because it is so big and complicated you can never tell if you are using it properly or not:)
The way you're supposed to use the FATEngine is to run your code from an exception handler... I think the demo code I ship with it demonstrates that. Only one place in your code should trap aborts:
Now, I agree that the library could be written better. Please note that I wrote it while I was a sophomore in college. Much time has passed.
Being forced to write in C really makes you think (and learn) about how far you can get writing Object-Oriented Software without having to switch to C++:
- Encapsulation can be accomplished by declaring but not defining struct types and using pointers to them in public functions ("opaque structs").
- Polymorphism can be accomplished by using callback functions (and I always recommend using two-stage function pointer typedefs e.g. typedef int myfunctype(int); typedef myfunctype *myfuncptr; instead of typedef int (*myfuncptr)(int); )
- Inheritance is a little more difficult because it's basically declaring a struct Base as the first member of struct Derived; with opaque structs you can't really do it as efficiently as in C++, but it can be done. Fortunately this is not a common requirement.
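The first two points above can be sketched in a few lines (hypothetical names throughout; the "header" and "implementation" halves would normally live in separate files, which is exactly what makes the struct opaque to users):

```cpp
#include <cassert>
#include <cstdlib>

// --- counter.h, what users see: the struct is declared but never
// defined here, so its fields are as private as they can possibly be.
struct Counter;                        // opaque struct
typedef int CounterOp(Counter *);      // two-stage typedef: function type...
typedef CounterOp *CounterOpPtr;       // ...then the pointer to it

Counter *counterNew(void);
void counterFree(Counter *c);
int counterApply(Counter *c, CounterOpPtr op);

// --- counter.c, the implementation: only this file knows the layout.
struct Counter { int value; };

Counter *counterNew(void)
{
    Counter *c = (Counter *)std::malloc(sizeof(Counter));
    c->value = 0;
    return c;
}
void counterFree(Counter *c) { std::free(c); }

// "Polymorphism": the operation is chosen at run time by the caller.
int counterInc(Counter *c) { return ++c->value; }
int counterDec(Counter *c) { return --c->value; }

int counterApply(Counter *c, CounterOpPtr op) { return op(c); }
```

The two-stage typedef means the function type itself has a name, which makes declaring matching implementations much less error-prone than the one-shot pointer typedef.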
In contrast, I recently had to implement a library in C++ and spent several days finding out why my virtual function void somefunction(const sometype *) wasn't being called; it turns out I should have declared it void somefunction(sometype *), without the const modifier on the parameter. With the const, it wasn't picked up by the compiler as an override for the base class's virtual somefunction, so the code just ended up calling the base class function instead of my subclass function. Aaargh!
I really like most of C++ and I like the new features in C++11, but I have the feeling that C++ is more prone to introduction of bugs that are hard to solve (or even detect!) than C, so for large projects, the time that you gain by writing the code in C++ instead of C is lost because it takes longer to debug.
And exceptions... well... on good days they make it difficult to follow all the possible execution paths, on bad days they should be taken behind the shed and put out of our misery. I think they are the worst invention since "goto".
===Jac
Oops, I did miss that one. Now it all makes sense. Thanks.
sorry for hijacking ...
Mike
Given the microcontroller environment, what would be a good replacement that won't force use of libraries we may not want to include? I understand the explanation of the use of that function, but there isn't necessarily going to be a console, and nobody wants it grabbing an I/O pin by default, I wouldn't think. Can you abort() with little overhead?
There are a whole host of __cxa_* functions that get included for exception handling, you can google for them. There's also the normal automatic default constructor, copy constructors, move constructors, copy assignment operator, move assignment operator, and destructors (when you don't declare your own). I'm sure there are probably plenty of compiler specific and runtime specific things too. atexit code to destruct global scope classes, etc. Most of them are not something you want to override or change because they would break things.
jac_goudsmit,
In C++11 they introduced the override keyword to help with the very problem you described. When you use it, the compiler verifies that your function signature matches a virtual function signature in the base class.
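A sketch of both the silent failure and the fix (hypothetical names; the const mismatch mirrors the one described above):

```cpp
#include <cassert>

class Stream
{
public:
    virtual int put(const char *s) { return 0; }
    virtual ~Stream() {}
};

// The bug reproduced: the parameter types differ only in const, so this
// declares a brand-new function instead of overriding -- and pre-C++11
// the compiler says nothing about it.
class BrokenSerial : public Stream
{
public:
    int put(char *s) { return 1; }   // NOT an override!
};

// With C++11's "override", the same const mistake would be a compile
// error; with the signatures actually matching, dispatch works.
class FixedSerial : public Stream
{
public:
    int put(const char *s) override { return 1; }
};
```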
Anyway, regarding using C++ properly: yeah, it's easy not to if you try using all the whiz-bang features. I consider proper use of C++ to be not using all of it (some would say most of it). Things I do to stay sane: never use STL or boost (ugh); very rarely use operator overloading (primarily only for math classes and container classes, e.g. [] for indexed element access); never change the meaning of an operator (like cout/cin do with << and >>); don't use RTTI; minimal use of exceptions (at work we use Google's breakpad stuff around the whole program); no multiple inheritance except for pure virtual interface classes (especially no diamond inheritance); very little of the new C++11 stuff (although override is awesome); USE const appropriately (not everywhere, but certainly in a lot of places); and I use templates, but I don't go overboard with them.
Also, I agree about OOP being overdone in C++. I do use classes a fair bit, but I also do a fair bit of procedural coding. You can see that in openspin; although I am in the process of refactoring things there, it's not going to change super significantly in OOP vs. procedural content.
I agree that normally when programming on the desktop or RPi I would not want to stub out any of those hidden run time functions. It just bugs me not knowing what they are and how to deal with them if I want to. This thread shows what can be done with propgcc I should be able to do the same on other platforms.
The problem with wanting to use C++ properly and trying to carefully draw a line around what features you use and how is that it is impossible.
If you want to write a Qt application you will end up doing it the Qt way. If you join a project that uses std:: or boost you will end up writing that way. They look and feel very different. That's before we get onto "modern" projects in C++11.
As written, the download size is 3516 bytes. If I comment out the "virtual ~Interface(){}" line I get 3012 bytes. I get 3012 bytes if I continue by commenting out the "extern" line. If I remove the classes entirely (via commenting out the Child C line and following) I get 2960 bytes.
So, it looks like we can have a class with virtual functions for only 3012-2960=52 bytes. I'm guessing that the virtual destructor brings in the vtable, which is where the next 504 bytes come from.
From the SO question it looks like as long as I never use "delete" on the interface class it is safe to remove the virtual destructor. Is this correct?
Moving forward I think the best option for an interface is to not define the __cxa_pure_virtual function and to not define the Interface destructor. This has two benefits:
1. No vtable (space savings and no performance loss)
2. A compile warning (via the "program too big" warning) if somebody tries to define a non-pure virtual function in the interface class.
If I then compile with this it pulls in 1500 bytes. In this case, 1000 of those bytes come from the PutFormatted method. If I use that function in main then the code size only goes up by a few tens of bytes. If I then remove the virtual declaration in the interface then the code size stays about the same.
All this is without using the StreamInterface class at all. My theory is that the compiler must always have all definitions of the pure virtual functions available, since if StreamInterface is passed as a pointer it doesn't know if the class implementing it is Class A or Class B or something else.
This puts a crimp on the usefulness of the interface pattern, since it comes at the cost of code size. Of course there is no real cost when you use it to the full extent, but it means that you can't define "general purpose" interfaces and have it be size efficient (since each child class's implementation will always be pulled in).
It can be very important to have a virtual destructor in your base class.
If you have an interface class and a derived implementation class then you can pass around pointers of the interface (base) type.
If you delete such an object via a pointer to its base type, the implementation's destructor will not be called and you may get memory and resource leaks. Try out the following code with and without the virtual destructor in the base class. You had better revisit all your code and check if you need any virtual destructors in base classes.
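Since the original code listing isn't shown here, a minimal reconstruction along the same lines (hypothetical names) would be:

```cpp
#include <cassert>

static int implDestructorRan = 0;

class Interface
{
public:
    virtual void use() = 0;
    virtual ~Interface() {}   // delete the "virtual" here and
                              // ~Impl() below is silently skipped
};

class Impl : public Interface
{
public:
    void use() {}
    ~Impl() { ++implDestructorRan; }   // e.g. free a buffer, release a pin
};
```

With the virtual keyword removed from ~Interface(), deleting through the base pointer runs only the base destructor and the counter stays at zero.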
Or you can just recompile everything with clang and it will give you a polite warning about that (and many other things):
Edit: Actually GCC gives that same warning with -Wall.
In my book, interfaces are interfaces, not classes. In general, you use them to talk to an object, you don't use them to create or destroy the object. Often you get an interface to an object from some create object function, and you either Release() it or you call an appropriate destroy function.
On occasion I'll have a base class that is interface-like, but will have some functionality built into it. I don't think of these as interfaces, but just abstract base classes. In that case, obviously, the destructor will be virtual.
The only alternative for an interface to pure virtual base classes that I have found is the "Curiously Recurring Template Pattern", also called "Simulated Dynamic Binding":
StreamInterface.h:
Serial.h:
main.cpp:
Unfortunately, this is only good when you know at compile time what "type" you want. Overall, it doesn't seem that useful to me. It seems that it's best for enforcing naming conventions (i.e. all output must be done with the "Put(char c)" method, and not the "out(const char c)" method or whatever) and not for dynamically plugging in different outputs.
It is very efficient, however. Download size for the above example is 2996 bytes (as opposed to 2928 bytes without the StreamInterface).
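For reference, a minimal sketch of the pattern being discussed (hypothetical names; this is not the code from the attached files):

```cpp
#include <cassert>

// CRTP: the "interface" is a template parameterized on the class that
// implements it, so every call resolves at compile time with no vtable.
template <typename Impl>
class StreamInterface
{
public:
    void Put(char c)
    {
        // Static dispatch: the concrete type is known at compile time.
        static_cast<Impl *>(this)->PutImpl(c);
    }
};

class CountingStream : public StreamInterface<CountingStream>
{
public:
    CountingStream() : charsWritten(0) {}
    void PutImpl(char) { ++charsWritten; }
    int charsWritten;
};

// Generic code is templatized on the concrete type; this is exactly
// where the "must know the type at compile time" restriction shows up.
template <typename Impl>
void putTwice(StreamInterface<Impl> &s, char c)
{
    s.Put(c);
    s.Put(c);
}
```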
You are right, one does not need a virtual destructor if one does not delete via a base class. Trouble is, it's this kind of sneaky C++ thing that is waiting there to catch the billion programmers like me who haven't worked out the rule yet, or who just forget and make a mistake.
Luckily, with -Wall -Werror and new C++11 features the compiler has become a lot better at stopping us from making such mistakes. For example, replacing the pointers in my example with shared_ptr forces one to get rid of the delete, and object lifetime is taken care of nicely; the virtual destructor is not even required any more, as far as I can tell.
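That matches my understanding: shared_ptr captures a deleter for the concrete type it is constructed with, so the right destructor runs even when the base destructor is not virtual. A quick sketch to convince yourself (hypothetical names):

```cpp
#include <cassert>
#include <memory>

static int implDtorRan = 0;

class Iface
{
public:
    virtual int value() = 0;
    ~Iface() {}              // deliberately NOT virtual
};

class IfaceImpl : public Iface
{
public:
    int value() { return 42; }
    ~IfaceImpl() { ++implDtorRan; }
};
```

The shared_ptr remembers it was built from an IfaceImpl*, so destruction goes through the derived type rather than the base pointer.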
Clang makes C++ even more user friendly.
I do agree that interfaces are not / should not be classes. In an abstract way I can have ThingA and ThingB that just happen to have the same knobs and dials but are in no other way related. In C++ we end up having to forcibly relate them by inheriting from a base class.
I'm curious as to why you have C++11 features on your "do not use" list. Which parts are dangerous?
I have only been looking at C++11 features recently and it seems to have made great strides in making C++ programming easier and less error prone. I'm thinking, "auto", "override", range for loops, shared_ptr, lambdas, move semantics...After a few days using that stuff I could not go back to the old C++ style, it's so much nicer.
Also why the "never use stl"? Isn't it better to use standardized containers and algorithms rather than inventing your own? What's the catch? Perhaps I have not used it enough to find out yet.
Well, except of course sheer size becomes the problem for embedded work as we are discussing here.
I think that's the long and short of it. Try to create a vector in one of your projects and see what happens. Also STL containers and heap storage go hand-in-hand. Since heap is to be avoided in these tiny run-time memory environments that would also seem to be a strike against STL containers.
That's interesting, You have given me something to consider there. I make interfaces that are intended to be used the same way, a concrete object is created and the interface is used only to turn the knobs, but I think some of my classes may only have the pointer to the interface base class but are still expected to mop up the concrete object when they are destroyed as they contain it. I'll need to ponder on this a bit.
The primary reason I don't utilize C++11 stuff yet is that it's not well supported across compilers/platforms yet. Being in the games industry means platforms include consoles. This is actually one of the main reasons I don't use STL, also. Believe it or don't, but STL is not fully compatible across different compilers (even GNU to VC++, let alone to consoles). The other big reason for no STL is that I think the design/interface is terrible, and it makes using your own allocators extremely painful (while having extremely BAD allocation patterns by default). In general, STL perf is terrible unless you wrangle the allocators. Not to mention that its uniform interface makes it easy for people to abuse in horrible perf-killing ways (like linearly iterating containers that are slow at it, or randomly accessing ones that are slow at that). Then there's the bloat and how it makes debugging painful... and I won't even go into the horror that is boost.
Also, regarding interfaces, my general usage pattern for them is for APIs to things that don't get created and destroyed much. Usually once at startup and shutdown. For objects that have transient lifetimes, I'm often using the refcounted base and/or abstract base classes that are not pure interfaces. The other way I use interfaces is to "add on" to a class and/or to allow decoupling. So I have an object, and I want some outside function or library to be able to interact with it without needing to know about the whole object derivation. So the interface is not a base in the sense that it's valid to think of the pointer as a reference to the object. For example, I have a Serializable interface that has only two pure methods (Serialize and Unserialize), and I'll have every object I want to be serializable include that interface. Then I can pass those objects to a function that can work with Serializable objects. That function doesn't create or destroy anything, it just uses the Serializable interface. In any case, if I have an "interface" that I want to actually be able to be a base deletable object pointer, then it stops being a pure interface and becomes an abstract base class with an appropriate virtual destructor and so on.
main.cpp
streaminterface.h
serial.h
The usefulness of CRTP occurs when you templatize the functions that use the patterns. Then, as long as you know at compile time in just one location which version of the base class you want to use the rest of it just falls into place.
It doesn't look that nice, but it's efficient and relatively clean for the end user. There's little or no overhead from what I can tell.
Edit: Why does posting from Chrome rip the CAPITALIZATION and new lines out?
I don't have that issue with Chrome removing capitalization or new lines.
CRTP is the "Curiously Recurring Template Pattern" that I posted in post #52 as an alternative to pure virtual interfaces. I didn't name it...
The Chrome thing is driving me nuts, it's almost impossible to post to this forum with Chrome on this Debian box. Even hitting the "post reply" button hangs forever. Oddly on another Debian + Chrome machine in my office it works fine. There must be some weird config difference somewhere.
That Mobius Template thing (CRTP) was about creating interfaces without virtual methods. Here is another way to do it; it does not even use inheritance. Seems this is faster than using inheritance but eats more memory.
As an exercise this can be done without C++ lambdas, use functor classes instead.