The world is still powered by C
potatohead
Posts: 10,261
http://tekhinnovation.blogspot.com/2017/03/after-all-these-years-world-is-still.html?m=1
Nice overview piece on the enduring relevance of C.
Comments
... from the link above ...
"Embedded Systems
Imagine that you wake up one day and go shopping. The alarm clock that wakes you up is likely programmed in C. Then you use your microwave or coffee maker to make your breakfast. They are also embedded systems and therefore are probably programmed in C. You turn on your TV or radio while you eat your breakfast. Those are also embedded systems, powered by C. When you open your garage door with the remote control you are also using an embedded system that is most likely programmed in C."
Maybe just serves as a container at the extremes, but there is value in that.
I think we can also say the world is powered by assembly language too. The core bits, bootstrapping, low level hardware access, are expressed as ASM. Might be in a C container, for a ton of cases, but no matter.
With type casting you can override its strong typing; it's for your own good that it will only allow it if you know what you're doing.
So you can do things that are not so nice, like pointing to an absolute fixed memory location, etc.
wlan.TXbuffer[8] = *(char *)0xFFDC;
The (char *) typecasts the hex value as a pointer, and the leading * dereferences it, i.e. we take the value stored at that location.
Unions can give you all-encompassing word access to a group of bits in a structure, for example.
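A minimal sketch of that idea (the names are made up for illustration, and bit-field layout is implementation-defined, so this is only portable within one compiler/target):

#include <stdint.h>

/* Hypothetical control-register image: poke bits by name, or the whole byte at once. */
typedef union {
    struct {
        uint8_t enable   : 1;
        uint8_t txReady  : 1;
        uint8_t rxReady  : 1;
        uint8_t errCode  : 3;
        uint8_t reserved : 2;
    } bits;
    uint8_t word;            /* all-encompassing access to the same storage */
} CtrlReg;

int main(void)
{
    CtrlReg ctrl;
    ctrl.word = 0;           /* clear every field in one write   */
    ctrl.bits.enable = 1;    /* or set individual bits by name   */
    return ctrl.word;        /* typically 0x01 with LSB-first bit-field allocation */
}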
You don't have to use braces all the time; if it's just a single statement after an if, they're not really needed.
Here is my ISR that allows any two parts of RAM and/or flash to be spit out as one block, and when done it wakes up the main.c event machine.
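The ISR itself isn't quoted above, so here is only a hedged sketch of the idea, not the poster's actual code (the UART_TXDATA register, its address, and the function names are hypothetical, and the vendor-specific interrupt attribute is omitted):

#include <stdint.h>

#define UART_TXDATA (*(volatile uint8_t *)0x4000A000u)   /* hypothetical TX data register */

volatile uint8_t block_done;                 /* polled by the event machine in main.c */

static const uint8_t *src[2];                /* two regions in RAM and/or flash       */
static uint16_t       len[2];
static uint8_t        idx;

void tx_start(const uint8_t *a, uint16_t na, const uint8_t *b, uint16_t nb)
{
    src[0] = a; len[0] = na;
    src[1] = b; len[1] = nb;
    idx = 0;
    block_done = 0;
    UART_TXDATA = *src[0]++; len[0]--;       /* prime the hardware (assumes na > 0)   */
}

void uart_tx_isr(void)                       /* entered on "TX register empty"        */
{
    while (idx < 2 && len[idx] == 0)
        idx++;                               /* move on when one region runs dry      */
    if (idx < 2) {
        UART_TXDATA = *src[idx]++;           /* both regions go out as one block      */
        len[idx]--;
    } else {
        block_done = 1;                      /* wake the main.c event machine; a real ISR
                                                would also disable the TX interrupt here */
    }
}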
Is it even possible to buy a 4-bit MCU nowadays?
Certainly C can be used to program even very memory constrained 8 bit machines.
On most 8-bitters, hand-written assembly language is best, but C can work very reasonably.
On that note, so can VM / interpreter type languages. Check this one out: https://github.com/dschmenk/PLASMA
In many ways, it's similar to the approach Chip took on SPIN and is taking on SPIN 2. Tune the language for the environment, and a VM / interpreter can perform nicely. Of course, Javascript is showing us this all the time now too. I'm a fan of this approach.
"Not so nice..."
Well, this is most of the controversial discussion in a nutshell. Where hardware specifics are involved, there are going to be trade-offs. Portability has zones of applicability. Maximizing hardware resources isn't going to be generally portable, though C does offer a consistent way to understand what was done. That's a net good.
What happens with the information can be very portable. At least that's how I see it.
The alternative is a separate I/O address space and special I/O instructions to access it. In which case you can't do it in C at all.
The not-so-nice part in the example given is using a magic number in the code rather than giving it a name with a #define or hiding that mess in a macro.
See example here: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka3750.html
What does matter is the meaning being associated with the number.
The prices that are shown are also not 'use me for new designs' prices.
Here is a 2014 data sheet for a 4-bit MCU, so they are in active development for some apps (this one is for remote controls):
http://www.coreriver.co.kr/data/product/ATOM130/CR-SS-AT130-V0.1.pdf
However, the cheapest readily/recently available MCUs do now tend to be 8-bit parts like the EFM8BB1, N76E003, STM8, or ATtiny series. Plenty on findchips under 40c and some under 30c.
New STC8F parts showing here... http://www.stcmcu.com/STC8F-DATASHEET/STC8-STC15-AD.pdf
(Prices on RHS, with 1.00¥ ~ US$0.15, and that 0.60¥ is under US$0.10 (!); volume unclear.)
Indeed! I never liked the special I/O addressing model for those kinds of reasons, and also because it seems to require an extra instruction or some setup. Often that doesn't matter, but when it does, it's a loss in potential functionality, capability, or performance; sort of like doing it through an I/O interface is. 6502 / 6522, for example. Memory-direct boils down to one instruction: read, write, or read-modify-write. Otherwise it's a few: set up registers or values, pass them, or just execute the appropriate I/O instruction.
Now, in P2, we've got a hybrid. One can write registers to do simple things, bit bang. Or, there are smarts there, so we offload it and work with results. (Smart pins, Streamer vs registers, or SETDACx instructions) Will be interesting to see how all that plays out.
One nice thing is being able to encapsulate it all. Make the stubs, then populate with assembly language. Once that's done, the rest is ordinary.
I've always boiled that down to 'needs assembly language anyway'; true in a ton of cases, but least true for memory-mapped I/O, which can be done entirely within the language. I just looked, and the C standards do not speak to I/O spaces at all.
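For what it's worth, the usual all-C pattern for memory-mapped I/O looks like this; the register name, address, and pin number are invented for illustration, and volatile is what stops the compiler from optimizing or reordering the accesses:

#include <stdint.h>

/* Hypothetical GPIO output register at a fixed bus address. */
#define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)
#define LED_PIN   (1u << 5)

void led_on(void)  { GPIO_OUT |=  LED_PIN; }   /* read-modify-write of the register */
void led_off(void) { GPIO_OUT &= ~LED_PIN; }

With a separate I/O address space you would need inline assembly or a compiler intrinsic instead, which is the point being made above.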
For the cost, an 8 bitter may well make sense. We could improve on remotes for sure. Maybe someone should.
There are two good reasons to not use magic numbers like that:
1) If it occurs in many places in your code it's better to #define it. Then if that number ever has to change you only have one place to find and edit. Less chance of introducing bugs.
2) It's nice to have a meaningful name for such things. Rather than use the number and perhaps add a comment. Makes the code easier on anyone perusing it later.
All good software engineering best practice.
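Applied to the earlier snippet, that advice looks something like this (the name is invented for illustration, and volatile is added since a hardware location can change behind the compiler's back):

/* One place to edit if the part or the memory map ever changes. */
#define WLAN_STATUS_REG  (*(volatile char *)0xFFDC)

wlan.TXbuffer[8] = WLAN_STATUS_REG;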
I guess the I/O address space and special I/O instructions made sense when memory address space was very small. Saves cluttering it up with I/O locations. Otherwise it seems to be a pain to me.
Yeah, I started searching around for 4-bit MCUs. Looks like there are billions of them being made.
Might be a fun little challenge to program one. I've never done a 4-bitter before. Although I guess they are not so easy to buy. They are probably one-time programmable or even mask programmable. They probably have shitty, unavailable dev tools.
I did once see an Intel 4040 on a board in a company's stock room back in 1980....
Me too. Even a lowly 6502 / z80 address space is large enough to make that gain marginal at best. And, where that's an issue, paging / banking is mature, easily done.
Maybe a little 4 bit device, like what jmg linked is a good use case. That remote chip is stripped down to the nubs needed.
...though I would totally upgrade that soc design to 8 bits and improve what can be done on a remote control. 6502 remotes! Why not? I'll bet the quantity costs can be close and if there is something needing a bit of an improvement, it's ***** remote controls. Hate the things. My TV lets me use a phone, and doing that actually makes a ton of sense. Didn't think it would.
The humble Remote Control has quite a wide range of CPU solutions...
From the simple 4-bit 'keyboard scan and send a code', up to the Learning Remotes with LCD displays, which can go well over 64k of CODE space.
Examples:
Single remotes : (all 4 bit from this vendor, some new ones on road map, smallest is 512B code, 16 nibbles RAM)
http://www.abov.co.kr/en/index.php?Depth1=3&Depth2=2&Depth3=1&Depth4=1
Universal remotes ( moving to 8 bit MCU, as large as 128k Code, 8k RAM )
http://www.abov.co.kr/en/index.php?Depth1=3&Depth2=2&Depth3=1&Depth4=2
It complicates the CPU.
It takes up valuable pins on a package.
It complicates external address decode logic.
What does it have going for it?
Just like MCUs with separate RAM and ROM allow each to be larger.
Yes, it complicates the MCU, but memory was quite costly when many of these were thought of.
On some designs, the CODE space was also slower than IO, so it made sense to not cripple your IO down to code-speed.
MCUs like the 8051 even go one step further: they have two I/O address spaces!
One is byte-based, and the other is bit-based.
Separate opcodes for each, but the bit-based IO is very efficient and compact, and easier to read as a bonus.
(The Byte-based IO shares opcodes with Data access, and the Bit-based IO shares opcodes with boolean data, so do you then argue it is all memory mapped, and not separate IO ? )
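As a small illustration of those two flavours in C (compiler-specific: the sfr/sbit keywords below are Keil C51 style, SDCC spells them __sfr/__sbit, and 0x90 is the standard 8051 address for Port 1):

sfr  P1     = 0x90;        /* byte-wide SFR: the whole port at SFR address 0x90 */
sbit P1_LED = P1^0;        /* bit-addressable alias of P1.0                     */

void demo(void)
{
    P1     = 0x0F;         /* byte access: a MOV to the direct address          */
    P1_LED = 1;            /* bit access: a single SETB instruction             */
    P1_LED = 0;            /* CLR bit                                           */
}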
I think ARM MCUs try to fix their I/O space problems by throwing larger address space at it. Some models define an address per bit. In a 32-bit MCU you have so much address space that you can afford to be less efficient with it.
A quick look at a very tiny system, the Atari 2600, shows 128 bytes of RAM and a 4K code and data space. Needs real time I/O too. An extra instruction space and or cycles needed for it would have killed the potential, which proved far more than many expected.
While a bit space is easy to read, it's got no real advantage over a memory mapped one and the bit ops one would use to do things.
COBOL programmers know that it is COBOL.
First of all, there are more man-years programmed in COBOL than in C.
Second of all, COBOL sources are much larger than the comparable C source.
Third of all, COBOL programs usually run for decades, not just a few years of dev cycles.
But there is hope for C, since the world's best open-source COBOL compiler (GnuCOBOL) transpiles COBOL source to C source and is slowly penetrating the market, converting mainframe COBOL to run off of mainframes.
It is just that COBOL is not so visible on the Internet, as the companies using it (mostly financial/military/government) do not want their stuff discussed on the net at all.
But you can all be assured that, without any doubt, there are way more lines of code written in COBOL than in any other computer language in existence.
Nothing beats the verbosity of COBOL.
Enjoy!
Mike
If the COBOL compiler was written in C, does that prove, or disprove the title "The world is still powered by C" ?
Let's reality-check some active language stats:
I note GnuCOBOL has close to 10,000 downloads in 12 months, running < 1000/mo.
FreePascal runs at close to 10x that: 106,000/yr.
FreeBASIC has 23,380/yr.
MinGW (a native Windows port of the GNU Compiler Collection (GCC)) has 28,149,618/yr.
As to atomically toggling I/O pins etc., they did create separate I/O memory maps.
There is also bit-banding: an aliased address region 32 times larger than the real area, where every bit of every byte gets its own word address (so it's only used for peripheral registers and part of RAM); you can reach an individual bit of each byte you poke/peek.
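For reference, on the Cortex-M3/M4 parts that implement bit-banding (M0/M0+ and M7 do not), the alias address works out as below; the GPIO register address is only a placeholder:

#include <stdint.h>

/* Peripheral bit-band region 0x40000000..0x400FFFFF aliases to 0x42000000..:
   every bit of every byte gets its own 32-bit word in the alias region.      */
#define BITBAND_PERIPH(addr, bit) \
    (*(volatile uint32_t *)(0x42000000u + (((uint32_t)(addr) - 0x40000000u) * 32u) + ((bit) * 4u)))

#define GPIO_ODR  0x40020014u                                /* placeholder register address    */

void set_pin5(void)   { BITBAND_PERIPH(GPIO_ODR, 5) = 1u; }  /* one write touches just that bit */
void clear_pin5(void) { BITBAND_PERIPH(GPIO_ODR, 5) = 0u; }  /* no read-modify-write needed     */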
All ARM Cortex manufacturers supply their own CMSIS file, a hardware abstraction layer standard.
Assembly is fine for smaller projects, but once they get large, having names and sub-field names for structures really helps to not get lost.
And it will only take a couple of days to port the code to another MCU family; doing that with pure asm would be a nightmare.
First: True; because COBOL takes so much longer to code than C.
Second: True, because COBOL is about 5x as verbose as C.
Third: True, because systems running COBOL are significantly slower than more modern ones running C.
So I agree with you, but probably not for the reasons a COBOL programmer would like.
(I wrote COBOL in college for a while, and I was really not a fan)
Yes, as stated before, COBOL users are not represented much on the Internet; GnuCOBOL tries to change that.
But take MicroFocus, one of the main players in COBOL compilers (and not much else): it was able to buy Novell, SUSE and other stuff, since it earns more money than it can sensibly spend.
@David,
"How much of that COBOL code is running in embedded systems?"
Not much, yet, but I am counting on getting PropGCC to help me there.
With Hercules you can already run an IBM/370 mainframe with MVS and JCL on your cell phone.
But, biased as I am, I can see lots of usage for COBOL on an embedded system.
First, COBOL has very nice features to use text-based screens for user interaction. It's built into the core language, see the Screen Section.
It has very nice reporting/printing options, also included in the core language, see the Report Section.
It has basic database features to support sequential and indexed files (think ISAM) built into the core language.
And it is (was?) made to run on very memory-constrained hardware.
P1 was still too small, but P2 will not be too small.
Enjoy!
Mike
smile. A CIL byte code interpreter on the P2 would be fantastic.
But I guess GCC has more support here than that evil MS stuff.
Currently it is not the memory constraints hindering me from using COBOL with PropGCC, but the lack of dynamic linking.
I am also begging for support of static linking on the GnuCOBOL front, but with somewhat slow results. So static linking is supported, but not yet for the main runtime system.
wait and see,
Mike
CIL is freely usable now, see
https://en.wikipedia.org/wiki/Common_Language_Infrastructure
https://en.wikipedia.org/wiki/Common_Intermediate_Language
The bytecodes themselves are here
https://en.wikipedia.org/wiki/List_of_CIL_instructions
and more background
https://en.wikipedia.org/wiki/Mono_(software)
Yes, I know all that, but I even get flak here because my username starts with MS.
And those are just the initials of my real name. Nothing to do with Microsoft.
But the pendulum swung from "sad PropTools is Windows only" to "we need platform-independent software to support Windows and Linux" to "if you want to run PropWare and SimpleIDE you need to install Linux on your new Windows 10 laptop".
Just great, isn't it?
Mike