Can you give one example of a large and commonly used program that is written
in asm?
Sadly no, though only because you qualified the question with 'commonly used'. For reasons unknown to me, most of the products that are well marketed are written in high level languages, and they usually contain stupid bugs (how many modern browsers do not have trouble with memory leaks?).
And as to portability spanning multiple HW platforms, this is the one good reason for high level languages in a serious project. If, on the other hand, it is only portability between OSes on the same platform, assembly still rules, as usually you only have to modify a small portion of the code (enter the ifdef), and this can be easily kept separate from the body of the program.
As to the reliability issue, I agree there as well. However, I believe the biggest leap in reliability occurs when you use a language with bounds checking (or even better: autovivification, as in Perl) and automatic garbage collection. Buffer overflows and memory leaks (from poor heap management) are probably two of the biggest contributors to software instability. And I blame this directly on the profusion of programs written in C.
WOW...
And most of the leaks are a result of programmers who rely on the bounds checking of the language that they choose to use. Relying on the language for bounds checking is VERY POOR PRACTICE. It is not that difficult to check an index value to assure that it is within the limits of the region being accessed, and the compiler cannot always handle that (dynamic memory allocation is a reality). So if you like buggy binaries, by all means rely on a language's bounds checking; if you like clean code that actually works, do your own bounds checking, regardless of the language you use.
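For what it's worth, the kind of hand-rolled check being described might look like the sketch below; buf, buf_len and idx are illustrative names, not anything from the posts above.
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* the program itself tracks how big its dynamically allocated buffer is */
    size_t buf_len = 16;
    char  *buf     = malloc(buf_len);
    size_t idx     = 20;                  /* deliberately out of range */

    if (buf != NULL && idx < buf_len) {
        buf[idx] = 'x';                   /* safe: index is in bounds  */
    } else {
        fprintf(stderr, "index %zu out of bounds\n", idx);
    }
    free(buf);
    return 0;
}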
You're mixing memory leaks and buffer overflows in the same pot, when they're two entirely separate issues. Memory leaks occur when no-longer-used data structures do not get returned to the heap, resulting in an ever-increasing memory footprint. This is what automatic garbage collection is designed to overcome. Buffer overflows result from a lack of bounds-checking, whether intrinsic or extrinsic, and result in memory corruption. They do not necessarily entail an ever-increasing memory footprint.
Not all commercial programs are written by star programmers. And even star programmers can make mistakes. Any language that can catch (in the case of buffer overflows) or eliminate (in the case of memory leaks) these oversights is a net positive for the software industry. IMO, it's just plain stupid to have to worry about heap management every time one writes a so-called high-level program. Extrinsic pointer arithmetic and heap management are probably the cause of more software unreliability than any other agents. Any good commercial programming language will relieve the programmer of these onerous chores.
Array bounds checking is the programming equivalent of wearing a seat belt. You don't want to rely on it, but it's good in an emergency. Even in C you often wind up using the n form of many RTL functions (e.g. strncpy) to avoid a buffer overflow, so the incremental cost isn't great. Garbage collection is OK, but bad programmers still find creative ways to leak memory. I've seen some doozies, like collections in the global scope that are only added to.
I am solidly in the "raise the semantic abstraction" camp of programming. The better the language used, the fewer programmers needed to achieve the same level of productivity. Fewer programmers means lower coordination costs and higher overall productivity. It means you are less likely to hire someone who will put a collection in the global scope and only add items to it.
Really high level languages often completely abstract the hardware and OS and allow for program portability. This can come in really handy because you develop on cheap hardware and deploy on some expensive fault tolerant monster server.
Phil:
My apologies, yes, I did mix the two meanings.
This does not change the fact that relying on a compiler to do your bounds checking and garbage collection is poor practice. By relying on the language you will either get quite slow executable object code or not all instances will be caught; either one is always bad. Why do we want faster systems if we are just going to slow them down?
In the case of bounds checking, the generated object code would have to keep track of every allocation and its size (including ones done in non-standard ways) and check every index into any block of memory against that list; even using a well designed tree, this is quite slow compared to a simple upper/lower bounds comparison on the index value. In the case of garbage collection, once again the generated code would have to maintain a list of all allocations, including those done in non-standard ways, and know, without question, when the code is done with each of them.
Even in the best of cases you end up slowing things down, and on many systems it is not possible for the compiler to track every system call that may provide memory to your code. And if you do the research to track (and correctly trap) every system call that could have memory allocation/deallocation side effects, that would slow the app to unacceptable levels.
Sadly no, though only because you qualified the question with 'commonly used'
Yes I did, on purpose. Previously you stated that "...in my book, the high level languages are good for small rarely used code..." so I was fishing for the opposite, a large and commonly used program written in assembler. You will be pleased to hear that I can think of one example close to our hearts. The Spin compiler in the Prop tool is written in x86 asm. Mind you that is not "commonly used" on the scale I had in mind.
Yes, large modern programs have memory leaks. However I don't believe that has got anything to do with the language they are written in, high level or assembler. It is a consequence of their being large and complex programs with many objects to take care of, those objects being referenced in many places. It can become hard to see when they are no longer needed and delete them. Or simply forget to do so. This is less of an issue in assembler because no large and complex programs are ever written in assembler:)
And as to portability spanning multiple HW platforms, this is the one good reason for high level languages in a serious project.
Or a reason for high level languages in multitudes of smaller projects/programs. As I said before, thank God Linux and the hundreds/thousands of programs that often come with it to create a usable OS are written in C/C++. That way I can move my work from x86 to ARM to whatever with ease. Many of those utilities are not so big, but combined it's a lot of code.
I'm sure people like Intel would love everything to be in x86 asm so that they have a lock on you forever.
Relying on the language for bounds checking is VERY POOR PRACTICE. It is not that difficult to check an index value to assure that it is within the limits of the region being accessed,
I'm in two minds about this.
On the one hand, programmers should always check their inputs for range, consistency, etc. Trust nothing coming in. In that sense, relying on automatic bounds checking is not a good idea.
On the other hand, littering your source code with array bounds checks and value range checks obscures and complicates it, making it ugly and unreadable.
A funny story about this. I once had a job testing some avionics code in Ada. Well, Ada checks array bounds and value ranges and types and everything it can at compile and/or run time. So there can't be any values out of range, right? It turned out that the flight control program I was testing crashed when a certain hardware input was too large.
How could that happen? Sure enough, the compiler checked the value of the input for correct range and raised an exception if it was out of range. But the hardware did not know anything about those ranges. The joke was that it turned out to be hard for the programmers to manually check that the input was within range, as the check itself required using numbers out of range, which the compiler did not allow :)
All in all I like to have bounds and range checks in my compiler/run time. You can always turn them off for the tested, debugged and delivered code to get the performance back. I notice that programmers who use languages with such checks spend a lot less time fighting with debuggers to find the same errors.
Automatic garbage collection though. I agree that is a sin.
Heater:
I can see the value in using bounds checking for debugging, though how can a language's bounds checking know the limits of every allocation? It is commonplace to use direct system calls to an OS to allocate memory; the compiler cannot know every possible means of allocating memory on every OS it may be used on. I often use compilers designed for one OS on another.
I agree that overdoing bounds checking would make code less readable. This is why, when accessing a dynamically allocated field, you do a single check against a global variable (which stores the size and base) to assure that the size is at least what it should be, and then either run the rest of the procedure or error out. Or, for some things, have a marked way of knowing where the end of the buffer is (as is done with strings: in many languages they are terminated by a 0 [this is user bounds checking], and in some the allocated size is stored just before the first element).
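A rough sketch of that "one check up front" pattern, assuming the base and size really are kept in globals (all names here are invented for illustration):
Code:
#include <stddef.h>

/* Hypothetical globals tracking a dynamically allocated field, as
   described above: the base pointer and the size actually allocated. */
static char  *g_field_base;
static size_t g_field_size;

/* One check at the top of the procedure; the rest of the code can then
   index g_field_base freely up to 'needed' bytes. */
int process_field(size_t needed)
{
    if (g_field_base == NULL || g_field_size < needed)
        return -1;                   /* error out once, up front */
    /* ... work on g_field_base[0 .. needed-1] ... */
    return 0;
}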
When we write an 'int strcpy(char *s, char *d)' routine do not we use something close to:
int strcpy(s,d)
char *s, *d;
{
int cnt;
for (cnt = 0; s[cnt]; d[cnt] = s[cnt], cnt++);
}
This is bounds checking at its best, on the source that is (we are trusting the dest in this case).
And to create the dest 'char *dst', when we do not know the length of the source 'char *s', do not we do something like:
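The snippet that followed did not survive the quoting. A plausible reconstruction of the idea (count the source ourselves, then allocate exactly that much) might be:
Code:
#include <stdlib.h>

/* Plausible reconstruction only, not the original code: measure the
   source string by hand, then allocate a destination of the right size. */
char *make_dest(const char *s)
{
    int cnt;
    for (cnt = 0; s[cnt]; cnt++)      /* count characters in the source */
        ;
    return malloc(cnt + 1);           /* +1 for the terminating null    */
}
Again, we are doing our own bounds checking.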
The memory for $string1 is automatically allocated when the assignment is made. When either variable is undef'd or goes out of context, the memory allocated is returned to the heap. Arrays work the same way:
@array1 = @array2
If a string or array grows, more memory is allocated for it automatically. 'Way simple and very productive to write programs in, since there's no malloc to worry about. Moreover, you end up with real variables, not pointers -- unless you want to create references from the variables. But you can't do arithmetic on references: they're not numbers but a distinct, non-arithmetic type.
When we write an 'int strcpy(char *s, char *d)' routine do not we use something close to:
Code:
int strcpy(s,d)
char *s, *d;
{
int cnt;
for (cnt = 0; s[cnt]; d[cnt] = s[cnt], cnt++);
}
This is bounds checking at its best, on the source that is (we are trusting the dest in this case)
So actually that would be bounds checking at its worst :)
No matter how much we trust dest, the input string can always be longer and end up corrupting whatever comes next.
By the way, that does not copy the string's null termination, so we have a further potential problem down the line.
I'm sticking with my raise the semantic abstraction meme. So in C I would use
char * newstr = strdup(oldstr);
That way I bypass all the messy off-by-one and destination buffer size issues. If I had to copy into an existing buffer, I would use strncpy, which limits the amount copied to the value of the n argument.
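For the record, a small sketch of both approaches; note that strncpy does not null-terminate an over-long copy, so the final byte still has to be set by hand:
Code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *oldstr = "a longer string than the fixed buffer";

    char *newstr = strdup(oldstr);             /* sized and allocated for you */

    char fixed[16];
    strncpy(fixed, oldstr, sizeof fixed - 1);  /* never writes past 'fixed'   */
    fixed[sizeof fixed - 1] = '\0';            /* force the terminator        */

    printf("%s\n%s\n", newstr, fixed);
    free(newstr);
    return 0;
}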
davidsaunders, most heap management schemes require metadata on the length of allocations anyway. So a fast and light bounds checker could re-use that information.
davidsaunders, most heap management schemes require metadata on the length of allocations anyway. So a fast and light bounds checker could re-use that information.
Wow, so we first check each allocation to determine which area the current pointer is within, and then check the index against the bounds of that area? To me this sounds a bit slow (OK, it could be quite fast if we know which area the pointer is supposed to be in and we did our own bounds checking using the metadata, but for the compiler, see above).
You mean in C++? You cannot use a function call (or any non-constant) in a variable declaration in C.
David,
Function calls can be used in a variable declaration in C. It is perfectly valid to do something like "int len = strlen(str);" or "char *ptr = malloc(size);".
Back to your original premise, you believe it is easier to write and maintain large blocks of code in assembly than in a high level language. Is that correct, or do I misunderstand you?
Heater and davidsaunders, the buddy block algorithm is a way to implement dynamic memory allocation in space-constrained environments. Another aspect of that algorithm is that finding the block from the pointer is easy and fast. I remember using it back in the '80s on computers with less memory and power than a Propeller chip.
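As a purely illustrative aside (not code from this thread): one reason the buddy scheme is cheap is that, with power-of-two block sizes expressed as offsets from the pool base, a block's buddy is found with a single XOR.
Code:
#include <stddef.h>
#include <stdio.h>

/* In a buddy allocator the pool is split into power-of-two sized blocks.
   The buddy of the block at 'offset' (relative to the pool base) of size
   'block_size' differs in exactly one address bit, so finding it is a
   single XOR. */
static size_t buddy_of(size_t offset, size_t block_size)
{
    return offset ^ block_size;
}

int main(void)
{
    printf("buddy of block at 0x40 (size 64) is at 0x%zx\n",
           buddy_of(0x40, 64));   /* prints 0x0 */
    return 0;
}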
Back to your original premise, you believe it is easier to write and maintain large blocks of code in assembly than in a high level language. Is that correct, or do I misunderstand you?
Close; more that it is easier to maintain many small blocks of code in assembly, and to do module testing in assembler. I have very rarely had a routine exceed 30 instructions (except on the PowerPC, and otherwise occasionally on the x86), and all of the high level language compilers that I have worked with will produce 50 to 90 instruction routines, not as well optimized, to do the same thing a well optimized 20 to 30 instruction assembly routine does. That and source level debugging rarely works out once compiled, and including debugging symbols (when supported) often causes different behavior than the same program without debugging symbols, thus more instructions per routine means much much more debugging time on the binary. This is my argument to prefer assembly language if all targets are for the same HW architecture with the same CPU.
davidsaunders, I understand your love of asm. I feel the same way.
To me it is just much easier to understand what is happening in asm compared to a language like C. Asm is really the simplest way to program...it just takes longer to get something written because you have to code using such tiny steps. You eventually end up with a pretty large library of common asm routines and you simply use these as blocks to accomplish stuff. It gets to the point where you can get something up and running almost as fast as someone does in C. The problem is that almost everyone else looks at your work and finds it mind numbingly complicated...and that is a bad thing.
If a lot of your asm code is used at a company it does tend to make you indispensable :-)
I still get calls and emails from my first job asking for clarification on my old asm code. I have since learned how to do much better documentation (thank goodness).
I now bite my lip and use C almost exclusively, only dropping down to asm when there is no other way to wring enough juice out of the uC.
When I said malloc is not much use on space constrained or time critical systems I was thinking like this:
1) In a real time system I would really like to know the worst-case execution time (WCET). Otherwise I cannot make meaningful statements about whether my system works or not. Some allocators have very poor WCET.
2) In any long running embedded system I would like to know that I am actually going to get the memory I ask for. I don't want the thing to fail in some odd case when memory fragmentation or such causes a malloc to fail. This problem is compounded when I have less memory anyway.
Turns out that these issues are rather hard to get a handle on and have been the subject of research for decades.
My conclusion, formed in the 1980s, has always been that in the face of these unknowns it's best to stay away from using memory allocators in real-time embedded systems. I am not alone in this conclusion; here is a quote from the introduction of a research paper written in 2007:
Although dynamic storage allocation (DSA) has been extensively studied, it has not been widely used in real-time systems due to the commonly accepted idea that, because of the intrinsic nature of the problem, it is difficult or even impossible to design an efficient, time-bounded algorithm. An application can request and release blocks of different sizes in a sequence that is, a priori, unknown to the allocator. It is no wonder that the name DSA suggests the idea of unpredictable behaviour.
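In practice, "staying away from allocators" usually meant statically sized pools carved up at start-up. A minimal sketch of that kind of fixed-block pool (names and sizes are made up for illustration) might look like this:
Code:
#include <stddef.h>

/* Hypothetical fixed-size block pool, the kind often used instead of
   malloc in real-time code. Alloc and free are each a couple of
   operations, so the worst-case execution time is trivially bounded and
   there is no fragmentation, at the cost of a single fixed block size. */
#define BLOCK_SIZE  64
#define NUM_BLOCKS  32

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static void         *free_list[NUM_BLOCKS];
static int           free_top = -1;

void pool_init(void)
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        free_list[++free_top] = pool[i];
}

void *pool_alloc(void)              /* O(1); returns NULL when exhausted   */
{
    return (free_top >= 0) ? free_list[free_top--] : NULL;
}

void pool_free(void *p)             /* O(1); caller must pass pool blocks  */
{
    free_list[++free_top] = p;
}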
Well times have changed and I have not been keeping up. I have to thank you for prompting me to look into this again.
Seems that the buddy algorithm, whilst providing a bounded temporal cost, is somewhat poor when it comes to memory fragmentation.
Turns out we now have at least one memory allocator that is suitable for real-time, space constrained systems. The Two-Level Segregate Fit (TLSF) allocator. Looks like I might have to review my attitude toward allocators in real-time embedded systems code.
You can get TLSF from here: http://rtportal.upv.es/rtmalloc/ . Where you can also find a bunch of papers re: memory allocators. Interesting stuff.
That and source level debugging rarely works out once compiled,
Why not? I have used many different compilers for different languages where source level debuggers worked just fine. With optimizations turned off there is a one-to-one correspondence between your source lines and the resulting instructions.
and including debugging symbols (when supported) often causes different behavior than the same program without debugging symbols
I have yet to come across a compiler where this is true. Including debug symbols into an executable does not change the actual code emitted by the compiler or executed.
What you might mean is that often one gets the option to compile in a "debug" mode which not only includes debug symbols but also removes any compiler optimizations. Changing the optimisation level of course results in different code output which can have odd consequences I must admit.
Strangely enough years ago, Windows 3.1 days, I had a book about programming for Windows in assembler. Initially I wondered who would be crazy enough to want to do that. Anyway I worked through the examples and it turned out to be easier for me to understand in assembler than those Microsoft Foundation Classes in C++:)
I have yet to come across a compiler where this is true. Including debug symbols into an executable does not change the actual code emitted by the compiler or executed.
I often use hard-coded pointers to help optimize when working in C, and there are times when this will change the behavior of a program, especially for those executable formats that put the debugging symbols right before each function's entry point.
Strangely enough years ago, Windows 3.1 days, I had a book about programming for Windows in assembler. Initially I wondered who would be crazy enough to want to do that. Anyway I worked through the examples and it turned out to be easier for me to understand in assembler than those Microsoft Foundation Classes in C++
I do not think anyone with any sanity uses Microsoft's Foundation Classes.
Heater, thanks for the pointer to the algorithm. It looks interesting, but it will take time to digest it.
MFC, ugh! I architected two programs using MFC and couldn't stand it. It's the MVC pattern implemented by someone who doesn't understand MVC, and it takes megabytes to do a hello world. The next program I architected, I insisted we use ATL and WTL because they are much less bloated. MS then walked away from ATL and WTL like they do with all their other technologies.
I believe the correct use of the acronym is "Microsoft's F@#&ing Code". That's the way it was pronounced by anyone I ever heard attempting to learn or debug it.
ElectricAye, you are saved!
I turned the skillet into skill set :-)
Thank you! Honestly, I'm not a spelling nazi, but that little typo was driving me crazy for some reason. My brain simply locked up every time it tried to figure out what a dead skillet might be.