I'm still waiting for an example of that huge cross-platform application that everyone knows and loves that is written in assembler.
To say that high level languages are only suitable for small programs and that assembler is best for large projects is basically nuts. There I said it. Are you trying to wind us up or what?
For sure all of your criteria for good assembler code apply equally well to coding in some high level language.
This may sound strange but I usually write the comments for a program first.
Then I go back and write the code that the comments describe. I know, it seems
backwards but believe me it actually makes coding easier.
This is NOT backwards; the other way around is backwards. The folks that "code backwards" and have success must be VERY clever, but they are still doing it the hard way. Which is fine; it only causes problems if there are multiple people working on the code, or if the code ever has to be updated or modified. Unfortunately, many folks falsely believe they are "saving time"; in fact this is a major contributing factor to schedule slip, bugs, and rework. They end up paying the cost of development multiple times for the same work. I've seen 10x.
Doing the comments first (that is, traceable requirements) before writing the code is the equivalent of "measure twice, cut once"; writing the code and skipping the comments (and usually the formal requirements as well) is the equivalent of "measure once, cut yourself".
I can't give you one today that "everyone knows and loves". The over-educated, under-taught pseudo-engineers, who are good at product pushing, are taught how not to code and hate assembly. Now, if you are willing to take a look back at the mid-to-late 80s, I can give you many very popular (at that time) examples.
Also:
MenuetOS is written 100% in 64-bit assembly and is coming close to a modern OS; it has all the basics down and just needs the add-ons.
The over-educated, under-taught pseudo-engineers, who are good at product pushing, are taught how not to code and hate assembly.
Really? Is there some kind of assembly programming conspiracy in higher education? Someone better look into that!
Seriously, Heater's question is valid and I asked a similar question. I don't think anyone doubts the merits of assembly. I do feel that some of your opinions are hard to digest. Hence, you're asked to prove your point.
Mike G:
I do agree with the validity of the intent of the question as I understand it. I do not agree that the question as asked has validity. I see the intent of the question as "Please point out an example of a good large program written in assembly"; though he asked about a popular program, popular does not necessarily equal good. I have pointed out at least 5 or 6 in various threads that meet the terms of good and large assembly language programs, though they are unfortunately not popular.
As to the understanding of my view: TO CLARIFY:
I am not saying that assembly is great code, any more than any other language.
I am saying that most larger projects in HLLs tend not to follow the rules that I mentioned, and thus increase overall development time and increase the risk of creating buggy code. The rules mentioned could be equally applied to any HLL and produce the same results.
I have pointed out at least 5 or 6 in various threads that meet the terms of good and large assembly language programs, though they are unfortunately not popular.
Why are these 5 or 6 assembly programs not popular?
I am saying that most larger projects in HLLs tend not to follow the rules that I mentioned, and thus increase overall development time and increase the risk of creating buggy code.
Give me an example of a large project. I deal mostly with multi-platform distributed business applications. I'd pull my hair out if I had to write everything in assembly.
Fun thread so far. Here's a very interesting data point: Just picked up a Lenovo laptop to replace my older one. (great machines, but a bit pricey) Anyway, it comes with some power management and system maintenance type tools. In generations past, those tools were written in some standard HLL, performing well. This generation is HORRIBLE. The power management functions are off the charts good. Those are, of course, assembly language bits, with other code wrapped around them to provide the user with some interface. In a modern OS, this is perfectly ordinary, right?
These tools are written in something with so many layers of abstraction that they take many seconds to launch, and have interaction response times on the order of a second!! That means mousing over something, for example, or clicking, returns the result about a second later. This is on a new i7, high end CPU, that is no slouch. Ugly, ugly stuff.
So, there are some extremes here. Staying at the bare metal is one, and too many abstractions away from that is the other. Neither is really all that good.
We don't have larger-scale projects in assembly language because it's too difficult to manage the interactions between them and the other code we work with every day. We don't have good-performing things written in very high-level languages, because they can't deliver the interaction we need, though they often easily mesh with the other code we need to work with every day.
The common element here is that other code we need to work with every day. On a micro-controller scale, or just embedded in general, assembly actually makes a lot of sense. The scope of interoperability isn't so large that it can't be managed, and the power / performance / interaction / robustness requirements warrant writing code at a fairly low level. Driver type code, or "glue" code, often needs to be written, and if it's got to perform then it's likely to be assembly, or C, and if it's just got to connect the dots, perhaps a higher level thing is all that is needed.
Funny how the Prop I fits into that picture isn't it? It's not really possible to write pure assembly language programs on it, due to how it's designed! We have SPIN for the whole package, and we have PASM for the high performance parts, with LMM / C / PBASIC kind of splitting the middle.
On a macro scale, that is exactly how computing in general is!
We can't do everything at the lowest level, because we don't have one holistic entity directing development, nor do we have one holistic set of requirements. So then, everything from ASM to very high level macro / scripting type languages are needed, so that the tools needed to get things done are available at the level the problem exists at. Want to automate some basic things? No assembly code can realistically do that, but on Windows VB can, for one older example, and TCL / TK can on Unix, for another.
Doing graphics? That's gonna be C, plus some "assembly", right? Assembly might reside on the GPU, C / C++ probably runs on the main machine, with some assembly in-lined there for various things, depending.
Doing systems administration, or perhaps gluing enterprise applications together? Maybe PERL makes great sense, coupled with a few executables...
Create a new device? Assembly bootstraps it, so that other higher-level things can leverage it.
etc...
Another data point: So, I have this Lenovo machine, and a nice MacBook Pro in my working pool of machines right now. The Apple is holistically designed. As a solution, it's potent. Size, power management, performance, UI, etc... are all very well optimized, because there is one holistic vision, and requirements, and code base, minus the open bits they build on. The Lenovo isn't that way, and it shows. Each has its strengths, cost benefits, capabilities, etc...
Finally, I look at things like portable phones running Java, or some other bloated thing that I would normally consider morbid on such a device, just because it runs counter to what I know is possible.
On one hand, I think it's nuts!! If there is a platform where the "Apple" kind of approach works, it's gotta be a phone, but then development time, and that need to interoperate at a macro level, makes that a difficult call. Do we refine it, only to be aced out by the people that didn't, but who put a useful, if less than optimal, device in other people's hands? I know what I would do, and that is get it done, work hard, ship, and score the market share. That's what everybody is going to do, except the purists; and there is nothing wrong with purists. Jobs is a purist, and Apple gets there, but look at the effort it takes, and the build-up of resources, alternative cash flow, supporting business, etc...! Huge.
We can't have, "I have the APP for that!" in assembly language land. What does that mean? Well, here's another perspective. There are two phones. One is simple, robust, and runs forever on a charge, does not crash, but only makes and receives phone calls and basic text. The other does crash, doesn't run as long, but can connect you to the world, and do lots of interesting stuff for you. Which one do people buy and why?
Because of that, the primary merit of assembly language is the ability to bootstrap devices into the computing ecosystem, perform the lower-level "glue" tasks needed to extract robust, high performance from devices where needed, and handle raw compute tasks that are structured. It's good for making older stuff perform at a useful level today too. Any retro computing fan is sensitive to that.
BTW: SPIN runs about as fast as assembly language does on my old Atari 800 XL. Benchmarked it. Some perspective huh?
That said, the other merit of assembly language is learning it brings people close to computing. All these machines do is add numbers together, perform boolean logic operations on them, and copy bits around. That's it! The high level language programmer may never come to know that. The assembly language programmer knows it cold. IMHO, that perspective is good for everybody, even if they just use a computer, or do light scripting, because it's like the physics of the world. Knowing the core dynamics of how things work is useful in a lot of contexts, and that is exactly what assembly language represents in the scope of computing.
We can't have, "I have the APP for that!" in assembly language land. What does that mean? Well, here's another perspective. There are two phones. One is simple, robust, and runs forever on a charge, does not crash, but only makes and receives phone calls and basic text. The other does crash, doesn't run as long, but can connect you to the world, and do lots of interesting stuff for you. Which one do people buy and why?
Very interesting post, potatohead! I especially agree about the phone thingy. Slightly off topic, but I've never understood why people want to carry around these huge iPhone monstrosities. I carry a phone that is cheap (about 1/10 the price), small (about 1/4 the size), light (about 1/3 the weight), robust & reliable (I've had it for about 3 years so far), and with a battery life so long (2 weeks) that if I forget to charge it for a week it doesn't matter. Result? I am always contactable.
On the other hand, many of my friends carry iPhones - bulky, complex, fragile, unreliable, and with such a short battery life that they are basically useless for at least part of each day. Result? They are constantly out of contact, despite spending (literally) thousands of dollars more than I do.
The only conclusion I can reach is that the telcos love these new expensive and unreliable phones, because people end up playing "phone tag" trying to contact each other on really expensive call plans that they've paid for because of all the cheap data rates they get for their "apps". That's why they are happy to subsidize the cost of them (I used to work for a telco, so I know a bit about the various methods they use to "encourage" you to spend more money than you ever imagined you would).
If I want the apps or the games, or the GPS or the camera, I'm happy to carry another device. When I want a phone, I carry a phone.
Ross, like you I carry a cheap phone because I want only a phone. Most calls still end up going to my voice mail even though I know I was not on the phone and had good signal when the call was made. I am convinced this is because the telcos have far more "yappy subscribers" than the system can handle so the calls end up in voice mail and I am not notified for an hour or two.
Because I go into secure areas where cameras are not permitted, I wanted a phone without a camera. This seems to be unobtainable now. I will instantly switch to the first telco that offers me a basic NO FRILLS phone with guaranteed reliable service. What we have now is Smile for the teenyboppers.
Hi kwinn,
I could tell you about how telcos deliberately degrade network performance for low-value subscribers (essentially diverting calls to such subscribers to voice mail even when they don't need to) while they offer premium service to high-value subscribers - but if I did, I would then have to shoot both you and myself!
I have a no-contract "Go Phone". 'Hardly ever use it, except for the occasional outgoing call, say if I'm running late waiting for a drawbridge or ferry. My minutes always expire before I use them up, so I just keep a "gift card" on hand to refill them the next time I need to use it. I never use it for incoming calls and don't even know what my number is, much less give it out. I've had to replace the SIM card twice, because it expired for lack of use. I really hate the dang thing and hate the notion of always having to be "in touch." For those rare times that I have to use it, it's a real convenience. The rest of the time, just owning one is an annoyance.
You don't need to tell me. As I said, I am convinced it is purposely done to force me to spend more than required. A few of my friends and I occasionally discuss the problem when we get together with an eye to getting revenge on the telcos. We have come up with quite a few ideas so far, but no legal or quasi legal ones yet.
@ Phil
Unfortunately my phone was intended mainly for clients to reach me when they had problems. I use it most often to return calls and occasionally to make calls if I am running late or to set up appointments. Typically I use less than 300 minutes per month. For everything else I prefer email so there is a record of everything.
Yep, you do sound a bit like Andy Rooney. I probably do as well.
On this phone topic:
I always find it interesting how many problems most people have with basic plans. I have an unlimited prepaid no-contract plan and have no trouble (I presume because I have unlimited minutes/texts/web, so they can not get any money by increasing my usage).
Because the translation tools have not (as far as I know) been written for the current generation of CPUs, I am unable to give a current portable example.
A couple of good and large assembly language projects; this list is just a beginning.
From '87 to '91 I worked in Digital Equipment Corporation's VMS operating system group. The bulk of that OS was written in VAX Macro assembler. Initially this locked it to the VAX platform, which became price-uncompetitive compared to Sun's SPARC in the late 80's. So DEC undertook a porting effort, which required building a cross assembler that translated VAX instructions to Alpha RISC instructions. Not all VAX instruction streams could be translated, so DEC had a team of engineers (including me) run every module through the translator and re-write the bits that wouldn't work. One of the cryptography modules was a lost cause and I re-wrote it in C. Being masochists, we wrote the user-mode components of VMS in a language called BLISS (which was anything but bliss to use). A significant but slightly less intensive effort was required to port those bits as well. The whole project took two years start to finish.
Meanwhile, DEC's version of Unix was written in C and, let's say, they had a much easier time porting to DEC's Alpha RISC.
Davidsaunders, to an extent I agree with what you are saying about working in assembly. I prefer using assembly to HLLs for working with small microcontrollers.
In the early 70's I wrote a preprocessor and dozens of macros that were used to write order entry, inventory, billing, etc. programs for a mini that previously had been used only for process control, so I know writing large programs in assembly is possible.
I was also involved in using translating software to go from 8080/Z80 to the 8086/88 in the 80's, so I know that is also possible. The caveat there is that no software package can do a perfect job of translation, even on cpu's as closely related as x80 and x86 were at that time. Much more difficult for unrelated cpu's.
The standard arithmetic, logic, indexing, and program flow instructions are not that difficult, but the more specific ones for I/O, memory management, interrupts, and such are a tougher proposition. Even this can be overcome for a translator going from CPU A to CPU B, though it may require a bit of hand coding.
The problem I see is that the translation process is far from perfect, and each time a program is translated it accumulates extra instructions necessary due to instruction set differences that require multiple instructions to replace a single instruction on the older cpu. A well written C or other HLL compiler should not have that problem.
I am aware that both translated and compiled HLL programs will require some code changes, but because HLLs have well-defined standards compared to assembler, I expect those changes would be fewer for the HLL.
Did you use Intel's conv86 utility to translate 8085 asm code to 8086?
I remember running all our process control code through that utility. The result was that the code got twice as big and ran at pretty much half the speed of the original on the 8085's, even though the 8086 was 16 bits and ran at at least twice the clock rate. That was a bit of a problem, as our production process built the entire product in 60 ms and there were 100 or so of the product passing through the machine at a time.
Turned out the problem was that in order to set the flags in the same way as the 8085 a lot of extra instructions were put in.
Now, there was an option to turn off such strict flag setting, which then produced a more or less instruction-to-instruction translation. But then you had to inspect all the code very carefully to see if that broke anything.
All in all this idea of translating assembler code around from architecture to architecture is just not workable in general.
Heater:
I am familiar with the difficulties in porting assembly code; it becomes much simpler if the programmer thinks about how he/she uses the instruction set of the CPUs. Though I have found that often it is much more difficult to port HLLs from one API to another; this can be a bigger problem than the instruction set.
Seems to me, if one limits oneself to a more portable subset of instructions, the major benefit of assembly is lost; namely, the ability to directly control the hardware at its peak potential. Given that, why bother, when a good HLL can accomplish the same thing, with the added benefit of facilitating a much higher level of interoperability?
And then there is the overhead of learning and applying all of those rules. How is that significantly different from the overhead one incurs in a HLL, where perhaps some inline assembly might be required, given specific use cases?
I don't see any material difference, leaving then the benefits. In the HLL, different bodies of code may interoperate easily, where in the ASM environment, they won't, more often than not.
Again, if there is one holistic set of requirements, assembly can shine. Where that's not true, it costs more than it delivers, generally speaking, which is why we ended up with C.
And C got us UNIX, and as far as I am concerned, those two are probably the most potent efforts in computing there have ever been. The benefits are HUGE, as one look at the body of open code today shows. The use value of that exceeds the speed and efficiency advantages assembly has, due to the reuse, interoperability, and portability that assembly does not have without a very significant and often dubious effort.
Potatohead:
Why bother?
Simple: if you code in assembly using a reasonable set of instructions that is portable over the target range of CPUs, you can still do many things that can not easily be done in HLLs. For example, if the target is modern desktop computers, on the PPC you have AltiVec ops to port to SSE2 on the x86, to port to whatever. The big issue is the use of flags that are not universally available, or the use of multiple stacks.
That said, I will repeat: all the advantages of assembly (other than optimization and debugging) can be achieved in most HLLs, provided that the developers use the same reasoning and methods that tend to come more naturally in assembly.
So then, build in the HLL, inline the assembly or provide it as a relocatable module, and take the dev time saved by working in the easy environment and reusing common code, and build out both ASM targets. Seems to me that puts the ASM where it's strong; the developer gets nearly all the advantages, and ends up with an effort that's quite easily modified and interoperated with, and that can run across multiple operating systems, etc...
When a new platform appears, it's really only necessary to deal with the CPU specific bits. Same for a CPU extension.
I would not have even started this thread if not for the fact that most seem to write their HLL code in a manner that does not consider these simple rules:
Write a comment header for every procedure listing the parameters it takes and any resources used (including stack), and include all functions/procedures called, including those called by the functions you call (and remember to document the total stack usage in detail).
Include a thorough description of the functionality of each procedure in its header comment.
Comment every line of code, thoroughly.
Use good resource tracking.
Comment all system calls and the resources known to be used by them (include these in the function/procedure header).
Keep a separate document containing all of this information.
If people would follow some simple rules, development time would be greatly reduced for any project and there would be fewer bugs to worry about. These rules, in part, are:
1) Write a comment header for every procedure listing the parameters it takes and any resources used (including stack), and include all functions/procedures called, including those called by the functions you call (and remember to document the total stack usage in detail).
2) Include a thorough description of the functionality of each procedure in its header comment.
3) Comment every line of code, thoroughly.
4) Code in good resource tracking.
5) Comment all system calls and the resources known to be used by them (include these in the function/procedure header).
6) Provide detailed comments at the head of each module.
7) Write the comments first, in a way that describes the procedure in order, then write the code between the comments.
8) Keep a separate document containing all of the information on each procedure and its usage, etc.
Here's a very off the beaten path, but interesting data point for this discussion.
The Atari VCS, more commonly known as "the 2600", is a machine that has only 128 bytes of RAM and no screen buffer hardware. The display is drawn one line at a time, in real time, by the CPU, while also managing to provide enough compute for game mechanics. It has a 4K ROM program space by default. Now, both of those things have been somewhat extended, but that isn't really material to what I am trying to say here.
Traditionally, this is an all-assembly-language machine. There isn't room for anything else, and the performance needed is barely there. Traditional development is to build an assembly kernel that provides the display elements needed, filling in with game logic, controller I/O, sound, etc...
For many years, it was thought that a higher level language would not make any real sense, because of the severe constraints the machine has.
Not true.
Today, there is a HLL called Batari Basic, and it literally is a compiled BASIC that runs on the machine fast enough to operate in real time, like the assembly language programs do. The disadvantage it has is that the "kernel" is somewhat standardized, limiting what a person can do graphically, but the rest of it is actually quite brilliant!
Clearly a pure assembly effort will be better than the HLL effort, but... Here's the interesting metric:
The number of games produced in BASIC is huge! There have been lots of them: some good, many OK, some bad. But they were done by rather ordinary people, who really don't have what it takes to build up a solid assembly language effort. My own personal experience was favorable. I was able to write something that actually worked, was playable, and share it with others in a few hours! That was on the first go, after seeing the environment released. My game was the first one that was playable at the time. (A breakout clone that was simple, done to demonstrate what was possible.)
Building up an assembly language kernel, never mind a real playable effort, took many days.
Today, the environment is being used for some commercial quality efforts, and one of the great features is being able to directly in-line assembly. As I progressed with the environment, I would drop assembly language bits in for speed, or because it was just easier, due to some CPU trick, or other that made sense, given the small program space.
Even on a small-scale device, a good compiler and a well-designed, lean HLL can deliver code that makes sense, and the higher-level environment frees mental space that can be used either for more rapid development or for realization of higher-level constructs.
The other interesting observation I've seen is platform extensions. Some clever soul has figured out how to feed programs to this machine with a small micro-controller. It is now possible to write programs that use more RAM, load from SD card, and do other very interesting things. The progression of code has been considerably more rapid with that environment in place. Assembly, as always, bootstraps that, but it is the HLL that really puts it to widespread use.
I take it you have not yet seen the change of direction for this thread?
That is a rather interesting perspective on the issue of asm vs. HLL, which was never meant to be the topic. Though I do enjoy this aside as the 2600 is a difficult system to program for.
Well, I have to tell you, I think one of the best attributes of the HLL is not having to document in that level of detail.
When things are too rigid, it's just not productive.
Look at SPIN on the prop! IMHO, it's largely self-documenting. The language is clear, and so long as people don't go completely nuts on variable names and such, it's something a person can read, come to understand fairly easily, and then make use of, or connect to.
There is a return on investment in play here that really cannot be ignored. If all the effort you describe is applied all the time, the development is "thick", yielding a low efficiency product. On the other hand, if it's just done rapidly, with no thought at all, the product is fast, but perhaps not robust, or reusable.
Balance is needed, and where that balance is optimal varies significantly with the requirements in play at the time.
There isn't any sure fire set of rules in these things. Time is finite, and sometimes things need to get done.
All the documentation you describe is not a bad thing, but it is a thick thing, and where there isn't reuse, it's a wasted thing.
Consider the case where code is written lean vs thick. Let's say the lean code takes a fraction of the time the thick code would. Later on, a small portion of the code is targeted for reuse, or integration. Compare and contrast the time spent parsing and testing, with the time spent avoiding parsing and testing. There is that.
More importantly, compare the time required to anticipate any and all possible integration and reuse efforts, and the impact of that on overall development time and feature sets built, vs that development time and feature sets possible without having to always manage so many considerations, many of which will never pay off.
My real world experience with this kind of thing clearly favors lean methods, and a parallel thing in MCAD, using constraints and equations to define physical geometry, plays out exactly the same way.
On larger-scale projects, be it simple programming, or manufacturing, etc., managing this balance is a higher-level consideration that will determine the overall success of the project and its return on development / engineering efforts.
So again, we are at the purist vs pragmatist place in the discussion, where there is no resolution possible without first framing it all in the context of something material to be done.
It is good to understand both extremes, but they are not means to a greater end, just tools one can use to reason what makes best sense, given the requirements at hand.
Edit: I see the title change now. My post here works for that.
It's been my life experience that when we see an "if only people would" kind of case, it's idealized and not inclusive enough to warrant that level of general consideration; otherwise, people would already be doing whatever it was.
Comments
I'm still waiting for an example of that huge cross-platform application that everyone knows and loves that is written in assembler.
To say that high level languages are only suitable for small programs and that assembler is best for large projects is basically nuts. There I said it. Are you trying to wind us up or what?
For sure all of your criteria for good assembler code apply equally well to coding in some high level language.
This is NOT backwards, the other way around is backwards. The folks that "code backwards" and have success must be VERY clever, but they are still doing it the hard way. Which is fine, it only causes problems if there are multiple people working on the code, or if the code ever has to be updated or modified. Unfortunately, many folks falsely believe they are "saving time", in fact this is a major contributing factor to schedule slip, bugs, and rework. They end up paying the cost of development mulitple times for the same work, I've seen 10x.
Doing the comments first (that is, traceable requirements) before writing the code is the equivqalent of "measure twice, cut once"; writing the code and skipping the comments (and usually the formal requirments as well) is the equivalent of "measure once, cut yourself".
Can not today give you one that "every one knows and loves". The over educated under taught psudo-engineers, that are good at product pushing are taught how not to code and hate assembly. Now if you are wiling to take a look back at the mid to late 80s I can give you many very popular (at that time) examples.
Also:
MinuetOS is written 100% 64Bit assembly and is coming close to a modern OS, it does have all the basics down, just needs the add ons.
Seriously, Heater's question is valid and I asked a similar question. I don't think anyone doubts the merits of assembly. I do feel that some of your opinions are hard to digest. Hence, you're asked to prove your point.
I do agree with the validity of the intent of the question as I understand it. I do not agree with the question as asked having validity. I see the intent of the question as being "Please point out an example of a good large program written in assembly"; though he asked about a popular program, popular does not necessarily equal good. I have pointed out at least 5 or 6 in various threads that meet the terms of good and large assembly language programs, though they are unfortunately not popular.
As to the understanding of my view:
TO CLARIFY:
I am not saying that assembly is great code, any more than any other language.
I am saying that most larger projects in HLLs tend not to follow the rules that I mentioned, and thus increase overall development time and the risk of creating buggy code. The rules mentioned could be equally applied to any HLL and produce the same results.
Give me an example of a large project. I deal mostly with multi-platform distributed business applications. I'd pull my hair out if I had to write everything in assembly.
These tools are written in something with so many layers of abstraction that they take many seconds to launch, and have interaction response times on the order of a second!! That means mousing over something, for example, or clicking, returns the result about a second later. This is on a new i7, high end CPU, that is no slouch. Ugly, ugly stuff.
So, there are some extremes here. Staying at the bare metal is one, and too many abstractions away from that is the other. Neither is really all that good.
We don't have larger scale projects in assembly language because it's too difficult to manage the interactions with them, and the other code we work with every day. We don't have good performing things written in very high level languages, because they can't deliver the interaction we need, though they often easily mesh with the other code we need to work with every day.
The common element here is that other code we need to work with every day. On a micro-controller scale, or just embedded in general, assembly actually makes a lot of sense. The scope of interoperability isn't so large that it can't be managed, and the power / performance / interaction / robustness requirements warrant writing code at a fairly low level. Driver type code, or "glue" code, often needs to be written, and if it's got to perform then it's likely to be assembly, or C, and if it's just got to connect the dots, perhaps a higher level thing is all that is needed.
Funny how the Prop I fits into that picture isn't it? It's not really possible to write pure assembly language programs on it, due to how it's designed! We have SPIN for the whole package, and we have PASM for the high performance parts, with LMM / C / PBASIC kind of splitting the middle.
On a macro scale, that is exactly how computing in general is!
We can't do everything at the lowest level, because we don't have one holistic entity directing development, nor do we have one holistic set of requirements. So then, everything from ASM to very high level macro / scripting type languages are needed, so that the tools needed to get things done are available at the level the problem exists at. Want to automate some basic things? No assembly code can realistically do that, but on Windows VB can, for one older example, and TCL / TK can on Unix, for another.
Doing graphics? That's gonna be C, plus some "assembly", right? Assembly might reside on the GPU, C / C++ probably runs on the main machine, with some assembly in-lined there for various things, depending.
Doing systems administration, or perhaps gluing enterprise applications together? Maybe PERL makes great sense, coupled with a few executables...
Create a new device? Assembly boot straps it, so that other higher level things can leverage it.
etc...
Another data point: So, I have this Lenovo machine, and a nice MacBook Pro in my working pool of machines right now. The Apple is holistically designed. As a solution, it's potent. Size, power management, performance, UI, etc... are all very well optimized, because there is one holistic vision, one set of requirements, and one code base, minus the open bits they build on. The Lenovo isn't that way, and it shows. Each has its strengths, cost benefits, capabilities, etc...
Finally, I look at things like portable phones running JAVA, or some other bloated thing that I would normally consider morbid on such a device, just because it runs counter to what I know is possible.
On one hand, I think it's nuts!! If there is a platform where the "Apple" kind of approach works, it's gotta be a phone, but then development time, and that need to interoperate at a macro level, make that a difficult call. Do we refine it, only to be aced out by the people who didn't, but who put a useful, if less than optimal, device in other people's hands? I know what I would do, and that is get it done, work hard, ship, and score the market share. That's what everybody is going to do, except the purists, and there is nothing wrong with purists. Jobs is a purist, and Apple gets there, but look at the effort it takes, and the build-up of resources, alternative cash flow, supporting business, etc...! Huge.
We can't have, "I have the APP for that!" in assembly language land. What does that mean? Well, here's another perspective. There are two phones. One is simple, robust, and runs forever on a charge, does not crash, but only makes and receives phone calls and basic text. The other does crash, doesn't run as long, but can connect you to the world, and do lots of interesting stuff for you. Which one do people buy and why?
Because of that, the primary merit of assembly language is the ability to boot strap devices into the computing eco-system, perform the lower level "glue" tasks needed to extract robust and high performance out of devices where needed, and for raw compute tasks that are structured. It's good for making older stuff perform at a useful level today too. Any retro computing fan is sensitive to that.
BTW: SPIN runs about as fast as assembly language does on my old Atari 800 XL. Benchmarked it. Some perspective huh?
That said, the other merit of assembly language is learning it brings people close to computing. All these machines do is add numbers together, perform boolean logic operations on them, and copy bits around. That's it! The high level language programmer may never come to know that. The assembly language programmer knows it cold. IMHO, that perspective is good for everybody, even if they just use a computer, or do light scripting, because it's like the physics of the world. Knowing the core dynamics of how things work is useful in a lot of contexts, and that is exactly what assembly language represents in the scope of computing.
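The point that machines only add numbers, perform boolean logic, and copy bits can be made concrete: even multiplication is built from just those primitives. The following is an illustrative sketch, not any particular CPU's microcode, of the classic shift-and-add method:

```python
def shift_add_multiply(a, b):
    """Multiply two non-negative ints using only add, AND, and bit shifts."""
    product = 0
    while b:
        if b & 1:            # boolean logic: test the low bit of b
            product += a     # add
        a <<= 1              # copy bits one place to the left
        b >>= 1              # copy bits one place to the right
    return product
```

This is exactly the perspective an assembly programmer has by necessity: every "high level" operation bottoms out in adds, logic ops, and bit copies.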
Very interesting post, potatohead! I especially agree about the phone thingy. Slightly off topic, but I've never understood why people want to carry around these huge iPhone monstrosities. I carry a phone that is cheap (about 1/10 the price), small (about 1/4 the size), light (about 1/3 the weight), robust & reliable (I've had it for about 3 years so far), and with a battery life so long (2 weeks) that if I forget to charge it for a week it doesn't matter. Result? I am always contactable.
On the other hand, many of my friends carry iPhones - bulky, complex, fragile, unreliable, and with such a short battery life that they are basically useless for at least part of each day. Result? They are constantly out of contact, despite spending (literally) thousands of dollars more than I do.
The only conclusion I can reach is that the telcos love these new expensive and unreliable phones, because people end up playing "phone tag" trying to contact each other on really expensive call plans that they've paid for because of all the cheap data rates they get for their "apps". That's why they are happy to subsidize the cost of them (I used to work for a telco, so I know a bit about the various methods they use to "encourage" you to spend more money than you ever imagined you would).
If I want the apps or the games, or the GPS or the camera, I'm happy to carry another device. When I want a phone, I carry a phone.
Yes, I know - how old fashioned!
Ross.
Because I go into secure areas where cameras are not permitted I wanted a phone without a camera. This seems to be unobtainable now. I will instantly switch to the first telco that offers me a basic NO FRILLS phone with guaranteed reliable service. What we have now is Smile for the teeny boppers..
Hi kwinn,
I could tell you about how telcos deliberately degrade network performance for low-value subscribers (essentially diverting calls to such subscribers to voice mail even when they don't need to) while they offer premium service to high-value subscribers - but if I did, I would then have to shoot both you and myself!
Ross.
Do I sound like Andy Rooney?
-Phil
You don't need to tell me. As I said, I am convinced it is purposely done to force me to spend more than required. A few of my friends and I occasionally discuss the problem when we get together with an eye to getting revenge on the telcos. We have come up with quite a few ideas so far, but no legal or quasi legal ones yet.
@ Phil
Unfortunately my phone was intended mainly for clients to reach me when they had problems. I use it most often to return calls and occasionally to make calls if I am running late or to set up appointments. Typically I use less than 300 minutes per month. For everything else I prefer email so there is a record of everything.
Yep, you do sound a bit like Andy Rooney. I probably do as well.
I always find it interesting how many problems most have with basic plans. I have an unlimited prepaid no-contract plan and have no trouble (I presume because I have unlimited minutes/texts/web, so they cannot get any money by increasing the usage).
MenuetOS is rather impressive. Shame I don't have a 64-bit machine to run it on.
Because the translation tools have not (as far as I know) been written for the current generation of CPUs, I am unable to give a current portable example.
A couple of good and large assembly language projects, this list is just a beginning.
Fantasm & LIDE: http://www.lightsoft.co.uk/Fantasm/fant.html
MenuetOS: http://www.menuetos.net/
BareMetal OS: http://www.returninfinity.com/baremetal.html
There is a 32 bit version of MenuetOS, See:
http://www.menuetos.org/M32.htm
Meanwhile, DEC's version of Unix was written in C, and let's say they had a much easier time porting to DEC's Alpha RISC.
I miss the old VAX, sob.
In the early 70's I wrote a preprocessor and dozens of macros that were used to write order entry, inventory, billing, etc programs for a mini that previously had been used only for process control, so I know writing large programs in assembly is possible.
I was also involved in using translating software to go from the 8080/Z80 to the 8086/88 in the 80's, so I know that is also possible. The caveat there is that no software package can do a perfect job of translation, even on CPUs as closely related as the x80 and x86 were at that time. It is much more difficult for unrelated CPUs.
The standard arithmetic, logic, indexing, I/O, and program flow instructions are not that difficult, but the more specific ones for I/O, memory management, interrupts, and such are a tougher proposition. Even this can be overcome for a translator going from CPU A to CPU B, though it may require a bit of hand coding.
The problem I see is that the translation process is far from perfect, and each time a program is translated it accumulates extra instructions necessary due to instruction set differences that require multiple instructions to replace a single instruction on the older cpu. A well written C or other HLL compiler should not have that problem.
I am aware that both translated and compiled HLL programs will require some code changes, but because HLL's have well defined standards compared to assembler I expect those changes would be fewer for the HLL.
Did you use Intel's conv86 utility to translate 8085 asm code to 8086?
I remember running all our process control code through that utility. The result was that the code got twice as big and ran at pretty much half the speed of the original on the 8085s, even though the 8086 was 16 bits and at least twice the clock rate. That was a bit of a problem, as our production process built the entire product in 60 ms and there were 100 or so of the product passing through the machine at a time.
Turned out the problem was that, in order to set the flags in the same way as the 8085, a lot of extra instructions were put in.
Now there was an option to turn off such strict flag setting which then produced a more or less instruction to instruction translation. But then you had to inspect all the code very carefully to see if that broke anything.
All in all this idea of translating assembler code around from architecture to architecture is just not workable in general.
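The flag-setting cost described above can be illustrated with a toy sketch (this is not actual conv86 output, just a model of the problem): a "loose" translation maps one source instruction to one target instruction, while a "strict" translation must follow each operation with extra work to recompute the source CPU's flags exactly.

```python
def add8_fast(a, b):
    # "Loose" translation: one operation; flags are whatever the
    # target CPU happens to produce, which may differ from the 8085's.
    return (a + b) & 0xFF

def add8_strict(a, b):
    # "Strict" translation: recompute 8085-style flags by hand after
    # every add. This is the per-instruction overhead that roughly
    # doubled the translated code's size and halved its speed.
    total = a + b
    result = total & 0xFF
    flags = {
        "carry":     total > 0xFF,
        "zero":      result == 0,
        "sign":      bool(result & 0x80),
        "parity":    bin(result).count("1") % 2 == 0,   # 8085 P = even parity
        "aux_carry": ((a & 0x0F) + (b & 0x0F)) > 0x0F,  # half-carry out of bit 3
    }
    return result, flags
```

One line of work becomes many, for every instruction whose flag behavior differs between the two CPUs.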
I am familiar with the difficulties in porting assembly code; it becomes much simpler if the programmer thinks about how he/she uses the instruction set of the CPUs. Though I have found that often it is much more difficult to port HLLs from one API to another; this can be a bigger problem than the instruction set.
And then there is the overhead of learning and applying all of those rules. How is that significantly different from the overhead one incurs on a HLL, where perhaps some inline assembly might be required, given specific use cases?
I don't see any material difference, leaving then the benefits. In the HLL, different bodies of code may interoperate easily, where in the ASM environment, they won't, more often than not.
Again, if there is one holistic set of requirements, assembly can shine. Where that's not true, it costs more than it delivers, generally speaking, which is why we ended up with C.
And C got us UNIX, and as far as I am concerned, those two are probably the most potent efforts in computing there ever was. The benefits are HUGE, as one look at the body of open code shows today. The use value on that exceeds the speed and efficiency advantages assembly has, due to the reuse and interoperability and portability that assembly does not have, without a very significant and often dubious effort.
Why bother?
Simple: if you code in assembly using a reasonable set of instructions that is portable over the target range of CPUs, you can still do many things that cannot easily be done in HLLs. For example, if the target is modern desktop computers, on the PPC you have AltiVec ops to port to SSE2 on the x86, to port to whatever. The big issue is the use of flags that are not universally available, or using multiple stacks.
That said, I will repeat: all the advantages of assembly (other than optimization and debugging) can be had in most HLLs, provided that the developers use the same reasoning and methods that tend to come more naturally in assembly.
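The "portable core plus per-CPU fast paths" idea can be sketched in any language; here is a hedged illustration in Python (the names and the vector-unit stand-in are hypothetical). The bulk of the program calls one well-defined interface, and only the implementations behind it need porting work, which is what keeps the CPU-specific surface small:

```python
def memclear_generic(buf):
    # Portable fallback: works on every target, no special instructions.
    for i in range(len(buf)):
        buf[i] = 0

def memclear_vector(buf):
    # Stand-in for a platform-specific fast path (AltiVec, SSE2, ...).
    buf[:] = bytearray(len(buf))

def select_backend(cpu_has_vector_unit):
    # Only this selection logic and the fast paths change per platform;
    # callers just invoke whatever this returns.
    return memclear_vector if cpu_has_vector_unit else memclear_generic
```

When a new platform appears, you write one new fast path (or fall back to the generic one) and the rest of the code base is untouched.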
When a new platform appears, it's really only necessary to deal with the CPU specific bits. Same for a CPU extension.
I do agree with you 100% on that.
I would not have even started this thread if not for the fact that most seem to write their HLL code in a manner that does not consider these simple rules:
Comment header for the parameters taken by every procedure and any used resources (including stack); include all functions/procedures called, including those called by the functions you call (and remember to include the total stack usage in detail).
Include a thorough description of the functionality of each procedure in its header comment.
Comment every line of code, thoroughly.
Use good resource tracking.
Comment all system calls and resources known to be used by them (include these in the function/procedure header).
Keep a separate document containing all of this information.
If people would follow some simple rules, development time would be greatly reduced for any project and there would be fewer bugs to worry about. These rules, in part, are:
1) Comment header for the parameters taken by every procedure and any used resources (including stack); include all functions/procedures called, including those called by the functions you call (and remember to include the total stack usage in detail).
2) Include a thorough description of the functionality of each procedure in its header comment.
3) Comment every line of code, thoroughly.
4) Code in good resource tracking.
5) Comment all system calls and resources known to be used by them (include these in the function/procedure header).
6) Provide detailed comments at the head of each module.
7) Write the comments first, in a way that describes the procedure in order, then write the code between the comments.
8) Keep a separate document containing all of the information on each procedure and its usage, etc.
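As an illustration of the header-comment rules above, here is a small, entirely hypothetical routine whose header documents its purpose, parameters, return value, everything it calls, and its resource usage:

```python
def parse_record(line):
    """parse_record(line) -> (key, value)

    Purpose:   Split one "key=value" configuration line into its parts.
    Inputs:    line -- str, expected to contain at least one '='.
    Returns:   (key, value) tuple of stripped strings.
    Raises:    ValueError if no '=' is present.
    Calls:     str.partition, str.strip (no further calls beneath those).
    Resources: no files, no globals; stack usage is one frame, O(1).
    """
    # Split on the first '=' only, so values may themselves contain '='.
    key, sep, value = line.partition("=")
    if not sep:
        raise ValueError("missing '=' in record: %r" % line)
    return key.strip(), value.strip()
```

The header alone tells a maintainer everything needed to call, reuse, or audit the routine without reading its body, which is the whole point of the rules.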
The Atari VCS, more commonly known as "the 2600", is a machine that has only 128 bytes of RAM and no screen buffer hardware. The display is drawn one line at a time, in real time, by the CPU, while also managing to provide enough compute for game mechanics. It has a 4K ROM program space by default. Now, both of those things have been somewhat extended, but that isn't really material to what I am trying to say here.
Traditionally, this is an all-assembly-language machine. There isn't room for anything else, and the performance needed is barely there. Traditional development is to build an assembly kernel that provides the display elements needed, filling in with game logic, controller I/O, sound, etc...
For many years, it was thought that a higher level language would not make any real sense, because of the severe constraints the machine has.
Not true.
Today, there is an HLL called Batari Basic, and it literally is a compiled BASIC that runs on the machine fast enough to operate in real time, like the assembly language programs do. The disadvantage it has is that the "kernel" is somewhat standardized, limiting what a person can do graphically, but the rest of it is actually quite brilliant!
Clearly a pure assembly effort will be better than the HLL effort, but... Here's the interesting metric:
The number of games produced in BASIC is huge! There have been lots of them, some good, many ok, some bad. But, they were done by rather ordinary people, who really don't have what it takes to build up a solid assembly language effort. My own personal experience was favorable. I was able to write something that actually did work, was playable, and share it with others in a few hours! That was on the first go, after seeing the effort released. My game was the first one that was playable at the time. (breakout clone, that was simple, done to demonstrate what was possible)
Building up an assembly language kernel, never mind a real playable effort, took many days.
Today, the environment is being used for some commercial quality efforts, and one of the great features is being able to directly in-line assembly. As I progressed with the environment, I would drop assembly language bits in for speed, or because it was just easier, due to some CPU trick, or other that made sense, given the small program space.
Even on a small scale device, a good compiler, and well designed and lean HLL, can deliver assembly code that makes sense, and the higher level environment means freeing mental space that can be used for either more rapid development, or realization of higher level constructs.
The other interesting observation I've seen is platform extensions. Some clever soul has figured out how to feed programs to this machine with a small micro-controller. It is now possible to write programs that use more RAM, load from SD card, and do other very interesting things. The progression of code has been considerably more rapid with that environment in place. Assembly, as always, boot straps that, but it is the HLL that really puts it to widespread use.
That is a rather interesting perspective on the issue of asm vs. HLL, which was never meant to be the topic. Though I do enjoy this aside as the 2600 is a difficult system to program for.
When things are too rigid, it's just not productive.
Look at SPIN on the prop! IMHO, it's largely self-documenting. The language is clear, and so long as people don't go completely nuts on variable names and such, it's something a person can read, come to understand fairly easily, and then make use of, or connect to.
There is a return on investment in play here that really cannot be ignored. If all the effort you describe is applied all the time, the development is "thick", yielding a low efficiency product. On the other hand, if it's just done rapidly, with no thought at all, the product is fast, but perhaps not robust, or reusable.
Balance is needed, and where that balance is optimal varies significantly with the requirements in play at the time.
There isn't any sure fire set of rules in these things. Time is finite, and sometimes things need to get done.
All the documentation you describe is not a bad thing, but it is a thick thing, and where there isn't reuse, it's a wasted thing.
Consider the case where code is written lean vs thick. Let's say the lean code takes a fraction of the time the thick code would. Later on, a small portion of the code is targeted for reuse, or integration. Compare and contrast the time spent parsing and testing, with the time spent avoiding parsing and testing. There is that.
More importantly, compare the time required to anticipate any and all possible integration and reuse efforts, and the impact of that on overall development time and feature sets built, vs that development time and feature sets possible without having to always manage so many considerations, many of which will never pay off.
My real world experience with this kind of thing clearly favors lean methods, and a parallel thing in MCAD, using constraints and equations to define physical geometry, plays out exactly the same way.
On larger scale projects, be it simple programming, or manufacturing, etc... managing this balance is a higher level consideration that will determine the overall success of the project, and it's return on development / engineering efforts.
So again, we are at the purist vs pragmatist place in the discussion, where there is no resolution possible without first framing it all in the context of something material to be done.
It is good to understand both extremes, but they are not means to a greater end, just tools one can use to reason what makes best sense, given the requirements at hand.
Edit: I see the title change now. My post here works for that.