If you are already a Spin programmer you can produce C++ code by using the spin2cpp converter -- http://forums.parallax.com/showthread.php?141233-New-version-of-spin-to-C-translator . You can compare the Spin code to the C++ code to see how things are done in C++. Some of the C++ code may look a little strange because the spin2cpp converter has to handle cases where Spin code accesses variables on the stack like an array. Also, the generated C++ code doesn't use the intrinsic C++ functions for doing console and file I/O, which would make the C++ code a lot cleaner. However, this might help you learn how to program in C++.
Fascinating article (Dennis Ritchie's own history of C: http://cm.bell-labs.com/who/dmr/chist.html). I'm certainly not going to argue with Dennis Ritchie since he would certainly have known. Many years ago I worked at Lucent and was talking to a very old timer who told me that C was developed to ease the writing of switch code, and I had no reason not to believe him since I knew he worked for Bell Labs when Unix and C were developed. I stand corrected.
One thing I do know is that at least until 2000 the phone switches made by Lucent used Unix as the OS. For that matter the cell site switches made by Motorola ran Unix as well with the bulk of the development being done on SGIs. That may be where the now urban legend got started.
Very interesting discussion here, please keep going. Basically I was wondering and pondering the same as Loopy, but not nearly as informed as he and everyone else here. Please keep educating me - I know enough C to be dangerous!
I am happy to hear there are others in the same comfort zone: wanting to learn C, but daunted by C++.
One of my biggest frustrations with newer computer languages is their sheer size, but another is that they seem to be offering the latest updates before I even get started.
So taking a view of C from a historical perspective is quite helpful.
I mentioned this article before, but since it is short and remains useful - I will mention it again.
Programming in C: A Tutorial by Brian W. Kernighan, circa 1974.
It is certainly NOT up-to-date, but Mr. Kernighan contributed a lot to C's early development and he is an excellent writer for beginners. It was also written at a time when 'data processing' did not have so many topics to discuss - the Internet was not universal, video processing was yet to come, and even big computers often had only 64K of RAM.
There really isn't much to learning 1974 C. Mostly it is about learning to recognize the language's components - primarily what is a function, what are arguments, and what are statements.
About the only 'big' difference from Basic is that C recognizes that knowing the memory addresses of stored data is quite important, and so pointers are introduced. But one can program up to a point without using pointers.
There is NOT a lot to remember. Branching can be done with IF and ELSE, with the support of FOR and WHILE loops. The SWITCH statement is very similar to PBasic's SELECT. All the I/O and goodies sit in libraries, and these can be studied individually as the need arises.
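For anyone coming from PBasic, here is a tiny, hedged sketch of exactly those pieces (plain C that also builds as C++; the program itself is invented purely for illustration): a function with an argument, IF/ELSE, a FOR loop, a SWITCH playing the role of SELECT, and one pointer.

#include <stdio.h>

/* A function with one argument - the SWITCH works much like PBasic's SELECT. */
void classify(char c)
{
    switch (c) {
    case ' ':  printf("space\n");   break;
    case '\n': printf("newline\n"); break;
    default:   printf("letter or other: %c\n", c); break;
    }
}

int main(void)
{
    char text[] = "hi\n";
    char *p = text;                          /* a pointer: the memory address of text[0] */

    for (int i = 0; text[i] != '\0'; i++) {  /* FOR loop over the string */
        if (text[i] == *p)                   /* IF/ELSE branching */
            printf("first character again\n");
        else
            classify(text[i]);
    }
    return 0;
}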
After all that is understood, moving on to The C Programming Language, 2nd ed (ANSI C) by Kernighan and Ritchie, is a lot easier.
Simply put, I prefer to warm up to a big study project with the shortest good overview I can get. Trying to penetrate 700 or more pages of dense technical material without a good overview is difficult at best.
BTW, I am beginning to migrate Spin programs to C just to see what they look like. spin2cpp is a very handy tool. Also, you have a choice between Catalina C and GCC.
About the only C++ addition I use regularly is the // for single line comments.
// it's just so handy
// and I really prefer
// this to making long
// multiline comments
// using /* */ :-)
// and it's handy for
// commenting out
// single lines of code.
It's funny, there's another thread where someone discusses how much they dislike the addition of the // for comments. Just goes to show you no matter what you do, someone will like it and someone won't like it...so, do what YOU like!!
Holly,
C++ features can be very useful for organising your code and don't need to make it any bigger or slower than doing the same thing in C.
All those millions of Arduino users get along with C++ just fine. Note that the Arduino guys carefully never call it C++ and only document its simple features.
I have been slowly grasping the core reason for OOP and C++: objects are reusable code.
Once you have objects, you no longer have to start from scratch and question everything.
The problem is that many OOP texts forget that this is the whole point of objects and just jump into characterizing the benefits of a given OOP language by discussing inheritance, abstraction, garbage collection, and other features. All these things are nice.... if you really need them.
But microcontroller projects tend to be rather small, and it takes a different kind of understanding to appreciate a big, ambitious language.
You are absolutely right, OOP is based on code reuse, but it is also rooted in extensibility and interfaces. A good example is all the flavors of FullDuplexSerial. Let's say FDS were an abstract base class. All derived classes benefit from the tested stability of FDS while maintaining the ability to extend FDS's implementation - for example, extending the Rx/Tx buffers to 32 bytes without modifying the base FDS object.
From a developer's perspective the derived FDS class looks exactly like the base FDS, only the buffer is larger. Now the derived FDS class becomes a drop-in replacement for FDS. This code organization concept is extremely powerful. There is a catch though. While the concept is easy to discuss, the implementation takes practice, with many mistakes along the way.
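To make the drop-in idea concrete, here is a minimal C++ sketch. The class and method names (FullDuplexSerial, BigBufferFDS, txByte) are invented for illustration - this is not the actual Propeller object, just the shape of the argument: the derived class changes the buffer size, inherits the tested behaviour, and works anywhere the base class is expected.

#include <cstdio>
#include <cstdint>
#include <cstddef>

// Stand-in for the tested, known-good FDS driver.
class FullDuplexSerial {
public:
    virtual ~FullDuplexSerial() {}
    // Tested base behaviour, inherited unchanged by derived classes.
    void txByte(uint8_t b) { std::printf("tx %c via %u-byte buffer\n", b, (unsigned)bufSize()); }
protected:
    virtual size_t bufSize() const { return 16; }   // original buffer size
};

// Derived class: identical interface, larger buffer, base code untouched.
class BigBufferFDS : public FullDuplexSerial {
protected:
    size_t bufSize() const override { return 32; }  // extended to 32 bytes
};

// Code written against the base accepts the derived class as a drop-in.
void sendHello(FullDuplexSerial &port) { port.txByte('H'); }

int main() {
    FullDuplexSerial plain;
    BigBufferFDS     big;
    sendHello(plain);   // 16-byte configuration
    sendHello(big);     // same call site, 32-byte configuration
}

Anything that used to take the base object still works when handed the bigger-buffered one, which is the "drop-in replacement" point above.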
The bugaboo is 'abstraction'. Just your term FDS in lieu of Full Duplex Serial is enough of an abstraction to confound some learners. When looking at all these various objects with their libraries of documentation, the implicit terminology becomes rather vast. The adept user can navigate all this, but the new learner is lost as to where something came from and how it fits into the whole.
I have similar problems with teaching English as a Second Language, and so I am deeply fascinated by computer languages and why some people can pick up so many whilst others can't handle but a few. I feel that I've been mostly in the slow lane until recently. Maybe that will help other slow learners.
Abstraction is supposed to be a good thing, but it has a downside of making everything seem like a black box.
Laughed out loud at Holly's clever post. I also second everything she said.
My work deals entirely with embedded systems, and every new project inevitably has the same discussion about picking a programming language. My invariable response is a mild, "I see no reason to look beyond C." So far my advice has proven to be brilliant.
OOP concepts are relatively easy to explain and understand but design and implementation take practice - like anything else. The cool thing about C++ and other OOP languages is you can decide if you want to take advantage of the OOP concepts. That's all we're discussing - concepts.
Abstraction is supposed to be a good thing, but it has a downside of making everything seem like a black box.
This depends on your point of view. Let's take a different example, an abstract stream class. Imagine if FDS, the SD card driver, and every other serial-type object all implemented the same base stream object. Once the novice user masters FDS, the SD card object has less of a learning curve because it uses the same I/O concepts as FDS.
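A rough C++ sketch of that stream idea, with all the names (Stream, FDS, SDFile) invented here purely for illustration: the abstract base is the 'contract' idea that comes up again below, and once a user can drive one Stream they can drive them all.

#include <cstdio>

// Hypothetical abstract stream: the contract every byte device agrees to.
class Stream {
public:
    virtual ~Stream() {}
    virtual int  rx()      = 0;   // pure virtual: derived classes MUST implement these
    virtual void tx(int c) = 0;
};

// A serial port and an SD-card file both honour the same contract.
class FDS : public Stream {
public:
    int  rx() override      { return 'A'; }              // stand-in for reading the UART
    void tx(int c) override { std::putchar(c); }         // stand-in for driving the Tx pin
};

class SDFile : public Stream {
public:
    int  rx() override      { return -1; }               // stand-in for a card read
    void tx(int c) override { (void)c; /* write to card */ }
};

// One routine works on either device, so the IO concepts are learned once.
void copyByte(Stream &in, Stream &out) { out.tx(in.rx()); }

int main() {
    FDS    serial;
    SDFile file;
    copyByte(serial, file);    // 'read' from the serial port, 'write' to the card
    copyByte(serial, serial);  // same routine, echo back over the serial port
}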
Still haven't delved into C++, but I kinda like what I am reading here. My buddy stopped by the other day; he is basically an assembly language user. He was not too impressed with C at all. He thinks in binary, I believe, so assembly works for him, but it is black magic to me!
I tend to agree with you. Having had to use many languages in embedded systems, from PL/M to Coral to C to Pascal to Ada and a few I have forgotten, it seems they all offer much the same, and there is no reason to switch away from the most used, which happens to be C.
I believe C++ features can be used to advantage even in small systems, but it has grown into such a huge, contorted language that I don't think any single normal human being can live long enough to fathom the repercussions of all its features.
Only recently have I found a language that offers significant advantages over C in the small embedded space, and that is XC for the XMOS devices. That provides support for parallelism and inter-process communication in the syntax of the language itself. So, for example, you can run your threads on one processor or many but your code stays the same. It also has a good grip on time and determinism. Of course, it is derived from C :)
Personally, I love being close to the hardware and being able to have some idea of how the actual bits and bytes or longs are being marched through the processor. But I also like having the convenience of being able to throw together a good application in a minimum amount of time if creativity or the marketplace demands that I do so.
The author of C++ has written a text, The Design and Evolution of C++, that historically documents and justifies the migration from C to C++. I may just order a copy to get a good idea of what his vision was and what his justifications for the changes were.
Assembler is quite good for being close to the machine. But for the study of computation we often use 'pseudo-code' examples, and though pseudo-code really is not C, it looks very close to C - enough so that understanding C will likely build confidence in studying algorithms in pseudo-code.
If you are having difficulties in sorting out C and C++, just Google "A History of C++" and you will get a nice PDF that will guide the way.
Personally, I tend to suspect that as one makes any use of language more abstract, one tends to idealize more and actually becomes less in contact with reality. And when I say 'language', I am including real languages - like English and Chinese, not just computer languages.
I suspect C is simple and direct and demands more discipline and more effort than an OOP language.
And here is a rather eye-opening tidbit from Linus Torvalds himself: http://lwn.net/Articles/249460/
I read the thread and agree with the response that you are doing OOP any time you use structures coupled with functions that operate on those structures. Eventually you'll find a need for polymorphism and reinvent C++'s vtable. I haven't read the Linux kernel source, but people who have say that Linux does contain such OOP design patterns (e.g. file_lock_operations).
Most of Linus's criticisms are really about the class libraries layered on top of C++, not the core language itself. Frankly, he has a point that all that abstraction can lead to inefficiencies which are really hard to find. He's also a bit confused in thinking that class organization problems are all that different from structure organization problems.
Programming is really all about managing complexity. Low-level programs generally have simpler problems to deal with, but have strict performance and memory constraints. Application programs often have highly complex problems, but far more memory and performance leeway. For example, C's text processing support is pretty feeble, while Perl's is fairly rich. While either language can process text, odds are the Perl programmer will produce a more reliable solution faster. But it would be insane to try to write device drivers in Perl.
So using the right tools for the right task goes a long way to preserving sanity.
Linux uses ioctl and Kernel Loadable Module device drivers to provide polymorphism. C++ can be reinvented in C if necessary, no problem, but it doesn't solve C's organizational problems. SPIN and C++ offer much better topographic organization ability than C.
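As a concrete illustration of "reinventing the vtable" in plain C (written here as C-style code that also builds as C++; the device names and ops table are invented for this sketch, not actual kernel code, although the kernel's file_operations tables look broadly like this): a struct of function pointers stands in for the class, and each driver fills it in with its own functions.

#include <stdio.h>

/* The 'vtable': a struct of function pointers. */
struct dev_ops {
    int  (*open)(void);
    void (*write)(int c);
};

/* Two 'derived classes': each device supplies its own implementations. */
static int  uart_open(void)   { puts("uart open"); return 0; }
static void uart_write(int c) { printf("uart <- %c\n", c); }

static int  null_open(void)   { return 0; }
static void null_write(int c) { (void)c; /* discard */ }

static const struct dev_ops uart_dev = { uart_open, uart_write };
static const struct dev_ops null_dev = { null_open, null_write };

/* 'Polymorphic' caller: it only knows the ops table, not the device behind it. */
static void put_char(const struct dev_ops *dev, int c)
{
    dev->open();
    dev->write(c);
}

int main(void)
{
    put_char(&uart_dev, 'A');  /* dispatches to the UART functions */
    put_char(&null_dev, 'A');  /* same call, different behaviour */
    return 0;
}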
Personally, I tend to suspect that as one makes any use of language more abstract, one tends to idealize more and actually becomes less in contact with reality. And when I say 'language', I am including real languages - like English and Chinese, not just computer languages.
Loopy, 'abstract' in C++ means derived classes must implement certain functionality of the base class - abstract methods. An abstract method declaration is a contract that states: if you derive from me (the base class), you must implement (the contract) this functionality (the method).
Heater and jazzed both mentioned organization. Deriving from an abstract base class organizes like things, which we end up calling types.
Personally, I tend to suspect that as one makes any use of language more abstract, one tends to idealize more and actually becomes less in contact with reality. And when I say 'language', I am including real languages - like English and Chinese, not just computer languages.
I suspect C is simple and direct and demands more discipline and more effort than an OOP language.
Thanks for posting that link - I had not seen it before. If that's a genuine quote, then my respect for Linus Torvalds just went up another notch. Colorful language aside, he is quite correct - OOP languages are unsuitable for most low-level or embedded purposes, and if an OOP language is desirable (presumably for some reason other than code size or performance), then C++ is a particularly poor OOP choice - partly because people assume it is simply a "better" version of C (Everyone knows how good C is, so surely C++ must be better???). But as Linus points out, using the C++ extensions to C is a slippery slope that generally leads to poor outcomes.
C++ is often promoted by people who think C needs "fixing" - but the real problem may simply be that C was the wrong language to choose in the first place. There are dozens of languages that are more suitable for many types of task than C - but for low-level or embedded development, C is just about as good as it gets.
But I'm a realist - in another 30 or so years this may no longer be the case.
I have been starting to question the idea that languages like Java, C#, Objective C, Javascript are actually successors to C in the same way that C++ is.
They all obviously have a similar look to C, what with the curly brackets for blocks, square brackets for arrays, similar-looking "if", "for", "while" constructs, and so on. But that is just a superficial syntactical similarity.
When it comes down to what all those syntactical structures mean (the semantics) then they are very far removed from C.
Java for example is totally class based; you cannot have a function standing on its own outside a class. Its types are very different, as you will see if you want to do unsigned 32-bit arithmetic. You have no access to real memory locations. Not to mention all that garbage collection baggage.
We can say similar things about C#. Objective C I have no idea about but I understand it may be more of a "branch" of C than the others.
Javascript is way out there, what with not really having types, its lexical scoping rules being very different, and so on.
As a, maybe extreme, example of what I mean about syntactic similarity but semantically being a totally different language consider the following Javascript:
function someFunc(parA, parB)
{
var localA = 1; // A C programmer would guess these
var localB = 2; // disappear when the function returns.
function innerFunc() // For a C programmer this is not allowed
{ // he would guess if it was a pointer
// to it would be a bad idea.
var innerA = 3; // A C programmer would guess these disappear
var innerB = 4; // when the function returns.
return((parA + localA + innerA) * (parB + localB + innerB));
}
return (innerFunc) // Return reference to function.
}
console.log("Hello");
f = someFunc(5, 6); // Get references to an inner function
g = someFunc(7, 8);
var result = f(); // Call an inner function
// (What from outside someFunc, crazy!).
console.log (result);
f = null; // Killing the reference causes all that local
// stuff to be garbage collected.
result = g(); // Call the inner func again, note how it has
console.log (result); // its own, different, values of
// its local parameters.
console.log("Bye");
This looks somewhat C like but to a C programmer it is a very strange world.
1) We have a function defined within the scope of another function. C does not allow this (although I believe GCC has a non-standard extension for it).
2) A C programmer would expect the local variables localA and localB to be out of scope and disappear after the function has returned. Same for the function's parameters parA and parB. Same for innerA and innerB, which are defined in the nested (inner) function.
3) And what is this? someFunc returns a reference to its inner function! Normally inner functions are not available to anyone outside the containing function. Surely that is going to go horribly wrong?
But no, in JS, as long as someone has a reference to innerFunc (the main-line variables f and g in this case), they can always call it safely. All those local-looking parameters and variables remain in existence until there are no more references to innerFunc.
Note how f and g produce different results when called; that's because there are now two copies of the local variables (the parameters parA and parB in this case).
All this is wildly different from C (Look up "closures"). In fact we find this is the basis of doing object oriented programming in JS, although it's hard to see initially, and even a function is not just a function but an object.
Edit: Turns out you can write very similar looking code to the above with inner functions in C and use the GCC non-standard extension. However it will work very differently, i.e. not work. Just found this nice quote from the GCC documentation about it:
"If you try to call the nested function through its address after the containing function has exited, all hell will break loose."
So all I'm saying is let's stop thinking of languages being derivatives or similar to each other just because the syntax looks a bit the same. There are far deeper things going on here.
Just to wind up my ramblings on language derivatives and syntax/semantics, here is the C version of the Javascript nested function example above. Superficially, to the untrained eye, it looks very similar. Its behaviour is very different; in fact, although it compiles without error or warning, it produces one wrong result and then crashes :). See the comments within.
//
// WARNING!! This code cannot work as it uses out-of-scope local variables.
//
// A little experiment with nested functions in C.
// See bizzare.js for similar looking code that behaves very differently.
//
#include <stdio.h>
void* someFunc(double parA, double parB)
{
double localA = 1; // A C programmer would guess (correctly) these
double localB = 2; // disappear when the function returns.
double innerFunc() // For a C programmer this is not allowed
{ // he would guess (correctly) that calling it via a pointer
// would be a bad idea.
double innerA = 3; // A C programmer would guess (correctly) these disappear
double innerB = 4; // when the function returns.
return((parA + localA + innerA) * (parB + localB + innerB));
}
return (innerFunc); // Return pointer to the inner function.
}
int main(int argc, char* argv[])
{
printf("Hello\n");
double (*f)(); // Declare pointers to functions returning double
double (*g)();
f = someFunc(5, 6); // Get pointers to inner function
g = someFunc(7, 8); // Error: This actually overwrites the previous
// call's params (5, 6) with (7, 8) !!!
double result = f(); // Call an inner function
// (What from outside someFunc, crazy!).
printf("%f\n", result); // Error: Wrong result due to error above !!!
f = 0; // Killing the reference causes all that local
// stuff to be garbage collected in JS, but not here in C
result = g(); // Error: Crashes !!
// Call the inner func again, note how it has
// its own, different, values of
// its local parameters in JS, not in C.
printf("%f\n", result);
printf("Bye\n");
return(0);
}
So we see that saying a language is "C like" or a derivative of C is rather like saying a Panda is a bear just because it sort of looks like one.
Tor,
I have to try that out but I suspect it still does not work as I think static will create a single instance of the variables. The code presented requires many instances. And how do I make the parameters to the functions static? Oh and in these examples the local variables are basically constants but in general they need not be.
Actually I was just now wondering how you would do this whole "closure" idea in C.
Some of the worst object code I have ever seen was compiled from C++.
This was an embedded device that supported an end-user programmable dialect of BASIC. I complained for years that it was kind of slow, but the manufacturer didn't take my complaint seriously. The typical conversation went like this:
Them: It's a 20 MHz 16-bit CPU, what do you want?
Me: I want it to be faster than the 4 MHz 8080 I learned on, and it's not.
Then, after six or seven years I twisted their arm and got them to insert a backdoor so I could at least run my own machine code and build up my own faster development system. This included some pointers to things in the firmware, which made a bit of hacking possible, and I went looking for the BASIC engine.
This device worked like Tiny Basic in that, while there were no line numbers, the entire source code of your program -- comments and all -- was loaded into the device's flash. Since reverse assembling a Tiny Basic dialect was the project that taught me to program, I am pretty conversant with Basic inner interpreters; I've written six or seven of them for my own use over the years. So I easily found the keyword list, and then found all the places the base addresses were referenced and started tracing calls.
Now, while it's slower without tokenization there are several pretty decent approaches to telling whether that P you are pointing at is the start of the word PRINT, PUT, or POKE. You scan the table (bonus points if you can use a binary instead of linear search) for the P's, then start checking the second character, then the third, until you reach the end of an entry and find an address to jump to (or an index to such an address in another table). Simple.
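For reference, the "simple" table scan might look something like this in C (keyword list and handler numbers invented for illustration); the table is kept sorted so a binary search could replace the linear loop for the bonus points.

#include <stdio.h>
#include <string.h>

struct keyword { const char *name; int handler; };

/* Sorted keyword table - a binary search could replace the linear scan. */
static const struct keyword keywords[] = {
    { "POKE",  0 },
    { "PRINT", 1 },
    { "PUT",   2 },
};

/* Return the handler index for the keyword starting at 'src', or -1. */
static int match_keyword(const char *src)
{
    for (size_t i = 0; i < sizeof(keywords) / sizeof(keywords[0]); i++) {
        size_t n = strlen(keywords[i].name);
        if (strncmp(src, keywords[i].name, n) == 0)
            return keywords[i].handler;   /* matched the whole word */
    }
    return -1;                            /* not a keyword */
}

int main(void)
{
    printf("%d\n", match_keyword("PRINT \"HI\""));  /* prints 1 */
    return 0;
}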
Well this dog wasn't simple. P, is it? It would push the P and a lot of status information onto the stack and then RECURSIVELY CALL ITSELF. As I traced out its method for ID'ing PRINT I figured it must have taken 40,000 clock cycles to do what should have taken no more than 100. I could not figure out what mental process went into what I took to calling the BOGOPARSE. Finally, on another forum, someone clued me in that the name of that routine was almost certainly LEX, and it was really meant for use in compilers. (Later still, when I described this to our own Bill Henning, he told me the title and author of the book the guy had learned compiler design from.)
So how did this get into production? Well, I had to visit the factory to help them put my back door into their next-generation product, and looking over the engineer's shoulder I saw that what should have been the BASIC inner interpreter looked gloriously simple and clean. It was only when you drilled into the classes that were used that you found out how obnoxiously inefficient they were (or, as he later did, you use a profiler to find out your system is spending 40% of its time doing something that should be nearly instantaneous).
This sort of thing does not happen to people who use C, Forth, Assembly, or other languages that aren't designed around the idea of hiding code and references as if that's a virtue. It might be a virtue in some abstract design sense but it's a nightmare in debugging and profiling.
P.S. Another thing that compiler liked to do was load the length of a structure, MUL by the length, add to the base address, add an offset within the structure, fetch a value, push the value, THEN load the length of the SAME structure, MUL by the SAME length (on an 80186, where MUL can take 100 clock cycles), add the SAME base, add a DIFFERENT offset within the structure, and fetch and push a different value. In some cases it did this 10 times in a row. When I alerted them to this bizarre behavior they found that there was a compiler switch that made it optimize that better, but until I came along and peeked under the hood nobody knew there was any reason to even research that.
Tor,
I have to try that out but I suspect it still does not work as I think static will create a single instance of the variables. The code presented requires many instances. And how do I make the parameters to the functions static? Oh and in these examples the local variables are basically constants but in general they need not be.
Actually I was just now wondering how you would do this whole "closure" idea in C.
A casual look indicates that the problematic part would be the innerA and innerB variables, which if declared 'static' will only exist in one version. In any case the (gcc-extension) inner function will not survive returning from the outer function (I believe - this is one gcc extension I never use. In any case I believe it's meant to be used the way Pascal inner functions are used - called by the 'outer' function, but hidden from other functions. I suspect they're not supposed to be called by extracting a function pointer and then called from elsewhere.)
I am SO delighted that someone with Torvalds' credentials said what he said about C and C++. (I'm stowing his quote in my briefcase, to pull out at opportune times.)
It is worth noting that the use of abstraction has had miraculous success, such as in chip design. But it may be too much rope in other cases. In the natural world, survival of the fittest takes care of such misapplications.
No, the whole caboodle is the problem: parA, parB, localA, localB, innerA and innerB all cease to exist when someFunc() returns in C. After all, they were only ever on the stack.
Yes, I am sure that the idea of inner functions, as they exist in Pascal and as far back as Algol, is that they are only ever used in the containing function and sub-containing functions. That has some benefit in structured code in that you have fewer parameters to pass to the inner function and it "hides" the inner function from other places in your module. That is a namespace issue: for example, in a single source code file you could have many inner functions called "compare", or whatever, but as they are all hidden in their containing functions there is no clash of names.
As such I have always thought this is nice but not necessary; it seems the C standards guys agree and have not included inner functions in any C standard so far.
The fact that the GCC extension can return a pointer to such an inner function seems totally stupid and a good reason not to allow inner functions in C.
Now, inner functions in Javascript and other languages that support closures behave totally differently and become very useful.
We have seen how C can support object-oriented programming like C++. After all, early C++ compilers were just preprocessors that emitted C. But the challenge here is: how do you program, in C, the behavior of closures as seen in Javascript?
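For what it's worth, here is one common way to fake that in plain C (it also builds as C++), offered as a rough sketch rather than a general answer: capture the "closed over" variables in a heap-allocated struct and hand that struct back, which is roughly what a Javascript engine does for you behind the scenes. The names mirror the earlier examples.

#include <stdio.h>
#include <stdlib.h>

/* The 'environment' a JS closure would capture implicitly. */
typedef struct {
    double parA, parB;
    double localA, localB;
} Env;

/* The 'inner function': takes its captured environment explicitly. */
static double innerFunc(Env *env)
{
    double innerA = 3, innerB = 4;
    return (env->parA + env->localA + innerA) * (env->parB + env->localB + innerB);
}

/* The 'outer function': instead of returning a bare function pointer, it
   returns a freshly allocated environment. Each call gets its own copy, so
   f and g keep separate state - the part C's stack cannot give you.
   (Error checking omitted for brevity.) */
static Env *someFunc(double parA, double parB)
{
    Env *env = (Env *)malloc(sizeof(Env));
    env->parA = parA;  env->parB = parB;
    env->localA = 1;   env->localB = 2;
    return env;
}

int main(void)
{
    Env *f = someFunc(5, 6);
    Env *g = someFunc(7, 8);
    printf("%f\n", innerFunc(f));   /* (5+1+3)*(6+2+4) = 108 */
    free(f);                        /* manual 'garbage collection' */
    printf("%f\n", innerFunc(g));   /* (7+1+3)*(8+2+4) = 154 */
    free(g);
    return 0;
}

The price is that the caller has to pass the environment around and free it explicitly - exactly the bookkeeping that closures and garbage collection hide.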