I just noticed some documentation for AUGI in Chip's docs.
Just read it but still don't understand it...
Looks pretty useful, if you can figure it out...
Ha! You posted this while I was typing my previous reply. It's a deal. Forget my last post.
Now, could we please have GOTO in Spin? It makes things much easier to read in many cases.
Joking...honest.
I hope you are joking; I have only ever once found a good use for goto.
Though you do have to admit that the redundancy argument does not hold water, unless you want to replace all the different loops with just a simple while loop and do the rest yourself, or replace the case statements with nothing but if statements.
... unless you call it "JMP". Then it's okay to use. :-D
LOL. It is true that it is the same thing, though it is needed to implement all the high-level stuff that gets rid of the GOTO statement.
Technically you could have a good structured language where there are no loop instructions at all, instead you use GOTO, though all else is the same. That would make a language wherein the only required flow control comes from procedure calls/returns and the lonely IF statement.
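For what it's worth, here is a small C sketch (purely illustrative) of that point: the same countdown written once with a structured while loop and once using nothing but IF and GOTO.

#include <stdio.h>

int main(void)
{
    int i = 3;

    /* Structured version */
    while (i > 0) {
        printf("structured: %d\n", i);
        i--;
    }

    /* The same control flow using only IF and GOTO */
    i = 3;
top:
    if (i > 0) {
        printf("goto-based: %d\n", i);
        i--;
        goto top;
    }

    return 0;
}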
If one writes a case carefully, will compilers optimize to that degree?
Yes - Compilers will do lots of fun things with case statements (good ones, at least).
If your cases are 0..N, almost all compilers convert it to a simple jump table. If your cases are N..(N+M), most compilers can subtract the constant offset and use a jump table. If your cases are non-sequential, a good compiler converts the code into a set of binary-partitioning IF/ELSE statements, whereas a basic one will just do a sequence of linear IF/ELSEs.
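To make that concrete, here is a hand-written C sketch of roughly what a compiler may emit for a dense 0..3 switch: one bounds check plus an indexed table lookup instead of a chain of compares. The opcode names and handlers are made up for illustration.

#include <stdio.h>

static void op_add(void)  { puts("add");  }
static void op_sub(void)  { puts("sub");  }
static void op_mul(void)  { puts("mul");  }
static void op_halt(void) { puts("halt"); }

/* The "jump table": one function pointer per case value 0..3 */
static void (*const jump_table[4])(void) = { op_add, op_sub, op_mul, op_halt };

static void dispatch(unsigned op)
{
    if (op < 4)           /* single bounds check */
        jump_table[op](); /* single indexed, indirect branch */
    else
        puts("default");
}

int main(void)
{
    dispatch(2);  /* prints "mul" */
    dispatch(9);  /* prints "default" */
    return 0;
}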
I have always thought ON value GOTO should be in with the other structures.
What do you mean?
Given that it's the "GOTO randomLabel" that breaks the idea of "structured programming" in a language, how is "ON value GOTO randomLabel" any more acceptable?
Wish I had never mentioned "GOTO", even as a joke. Now we have to squeeze every last worm out of the can.
@davidsaunders
Technically you could have a good structured language where there are no loop instructions at all, instead you use GOTO,
Technically I think you cannot. Unless you have redefined how GOTO works. Or what you mean by "structured".
Yeah, loops are not required (as in, not even made with GOTO).
@everyone
When the guys designing the language, and more importantly actually implementing it, really want a feature, then it's time to give up any objections to whatever features they want.
As I said "I agree to disagree". Let's move on to the next case.
Heater, the fact that it reduces down to the ugly "if then goto random label" must be the reason. Makes sense. Didn't really see that.
Again, I wasn't asking for it. Don't care, but it was cool to learn well written case or switch constructs do get optimized. Heck, anyone who wants this stuff can just add it.
Not about that.
I feared that long chain of if/then statements that SPIN produces. Maybe we can target that in our compiler work.
It's a big gain, especially given a byte-code target.
For systems level programming, goto gets used like we see in assembly. The gains and needed program flow warrant doing that. Our stuff isn't often in that scope. There is PASM for when it is. Perfect, in my view, given we optimize the sweet cases.
For systems level programming, goto gets used like we see in assembly.
Yes indeed. You will see a lot of goto in the Linux kernel.
But not really used the way JMP is used in assembly.
The idea is that your normally expected program flow is handled with proper "structured" if/do/while blocks etc. So that the work you want to do is comprehensible.
But exceptional circumstances can cause a bail out to some clean up code with goto.
Linus Torvalds makes a very good case for this style of programming in C.
Kind of like the use of exceptions in C++, but neater, faster, and more predictable.
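Something like this minimal C sketch shows the idiom (the file-copying task and all the names here are invented for illustration): the normal path reads straight down, and each failure bails out to exactly the right amount of cleanup.

#include <stdio.h>
#include <stdlib.h>

int copy_header(const char *src_path, const char *dst_path)
{
    int err = -1;          /* assume failure until the normal path completes */
    FILE *src, *dst;
    char *buf;
    size_t n;

    src = fopen(src_path, "rb");
    if (!src)
        goto out;

    dst = fopen(dst_path, "wb");
    if (!dst)
        goto out_close_src;

    buf = malloc(512);
    if (!buf)
        goto out_close_dst;

    n = fread(buf, 1, 512, src);
    if (fwrite(buf, 1, n, dst) != n)
        goto out_free;

    err = 0;               /* success: fall through the cleanup in order */

out_free:
    free(buf);
out_close_dst:
    fclose(dst);
out_close_src:
    fclose(src);
out:
    return err;
}

int main(void)
{
    /* Placeholder file names, just to make the sketch runnable. */
    return copy_header("in.bin", "out.bin") ? 1 : 0;
}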
Executables are compiled binary gunk that gets the job done. As you specified in the source code. And as twisted and mangled by your optimizing compiler.
As an extreme example. Imagine you have some long and complicated function in your source that takes a few parameters.
Perhaps the compiler figures out that you only ever call that function with some constants as parameters in your program.
Boom! It can compute the whole thing at compile time.
The resulting executable has no trace of your long and complicated algorithm in it!
That is extreme, but for many years now stepping through optimized code in a debugger has been very confusing. What you wrote is not in the executable! Turn off optimization if you want to debug it like that.
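As a toy illustration of that point (assuming an optimizing compiler such as gcc or clang at -O2, which can typically do this), the loop below need not survive into the executable at all; the compiler is free to evaluate the call itself and emit just the constant 500500.

#include <stdio.h>

/* Long-winded on purpose: sums 1..n with a loop. */
static long slow_sum(long n)
{
    long total = 0;
    for (long i = 1; i <= n; i++)
        total += i;
    return total;
}

int main(void)
{
    /* Only ever called with a constant, so the whole computation
       can be folded at compile time. */
    printf("%ld\n", slow_sum(1000));
    return 0;
}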
I mean the net gain and robust behavior. How that gets done isn't a primary concern. Let the gunk be gunk.
If it can be specified in a case or switch and work like a math-driven branch, fine by me. I don't want to be sorting cases and profiling when I know the optimal one works fast no matter how many cases there are.
Or, of course, it can be written explicitly using assembly language.
Technically you could have a good structured language where there are no loop instructions at all, instead you use GOTO, though all else is the same. That would make a language wherein the only required flow control comes from procedure calls/returns and the lonely IF statement.
Well, technically you could have a language where the only flow control comes from function calls, period -- GOTO is completely unnecessary. LISP kind of acts like this (although IF is a special form). An even more extreme example is the LazyK language, which doesn't even have an explicit IF and can be implemented with just two builtin functions K and S such that Kxy = x and Sxyz = (x y)(x z). This language is Turing complete, and there's an implementation for the Propeller (https://github.com/totalspectrum/proplazyk). The Turing completeness isn't just theoretical; someone has written a version of the old Colossal Caves Adventure text game in the similar Unlambda language, and there's an Unlambda interpreter written in LazyK, so in principle one could play Colossal Caves with LazyK. The memory requirements for it exceed what's available on the Propeller, alas (probably by many orders of magnitude).
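Just to illustrate the "flow control from function calls" idea in a more familiar language, here is a rough C sketch (not SKI itself; the names are invented) in which a boolean is represented as a function that selects one of its two arguments, so the choice is made by a call rather than by an if statement.

#include <stdio.h>

typedef int (*choice_fn)(int, int);

/* Behaves like K: always returns its first argument. */
static int pick_first(int a, int b)  { (void)b; return a; }
/* Behaves like K applied to I: always returns its second argument. */
static int pick_second(int a, int b) { (void)a; return b; }

/* Map a C truth value onto one of the two selector functions.
   (The comparison still happens; only the branching is replaced.) */
static choice_fn from_flag(int flag)
{
    static const choice_fn table[2] = { pick_second, pick_first };
    return table[flag != 0];
}

int main(void)
{
    int x = 3;
    /* "x < 5 ? 10 : 20" expressed as a function application */
    int result = from_flag(x < 5)(10, 20);
    printf("%d\n", result);   /* prints 10 */
    return 0;
}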
Spin already has a ternary operator. CSPIN will convert the following C code
int sub1(int x, int y, int z)
{
    return x < 5 ? y : z;
}
to
PUB sub1(x, y, z)
  return lookupz(-((x < 5) == 0) : y, z)
This could be made more efficient by rewriting it as lookupz(-(x => 5) : y, z). Or maybe a special version of lookup could be added that accepts values of 0 or nonzero as the first argument. It would look like lookupl(x < 5: y, z).
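For readers more at home in C, the lookupz trick amounts to turning the comparison into a 0/1 index and selecting from a tiny table. This hand-written sketch (illustration only, not actual CSPIN output) behaves like the ternary above.

#include <stdio.h>

static int select_by_index(int x, int y, int z)
{
    int table[2] = { y, z };   /* index 0 -> y, index 1 -> z */
    int idx = (x >= 5);        /* 0 when x < 5, 1 otherwise  */
    return table[idx];         /* same result as: x < 5 ? y : z */
}

int main(void)
{
    printf("%d\n", select_by_index(3, 10, 20));  /* 10 */
    printf("%d\n", select_by_index(7, 10, 20));  /* 20 */
    return 0;
}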
There is a reason why structured, readable, portable code, to be handed down through the generations, is desirable.
Just read that story; good read. It sounds like Mel was one of those people who would dream in the instruction set of the computers they used: the type of programmer who could do anything in the absolute most optimal way, always in the machine's own language. In other words, it sounds like Mel was an example of the true perfection of programmers, the kind we would all love to one day know and could never hope to match.
Spin already has a ternary operator. CSPIN will convert the following C code
int sub1(int x, int y, int z)
{
    return x < 5 ? y : z;
}
to
PUB sub1(x, y, z)
  return lookupz(-((x < 5) == 0) : y, z)
This could be made more efficient by rewriting it as lookupz(-(x => 5) : y, z). Or maybe a special version of lookup could be added that accepts values of 0 or nonzero as the first argument. It would look like lookupl(x < 5: y, z).
Now that is a part of Spin I did not know. Wow, that could be useful for many things, especially if a version of Spin ever gets procedure pointers.
Hardware stack is:
* 22-bits wide (PC,C,Z)
* 8 levels deep
* No indication of full or empty
Hub stack:
* Uses PTRA/PTRB
* Can be used for both calls/returns and data
* Relatively slow for multiple back-to-back stack operations
* If the stack contains calls/returns, it only grows upward in memory
* If the stack contains only data, it can grow either upward or downward in memory.
Actually, that last bit about the hub stack points towards a minor optimization. If CALLA and PUSHA performed a PTRA-- instead, then back-to-back calls would be faster (a couple of clock cycles per call).
Thinking some more about this, the local hardware stack is always used by interrupts, right?
Users may want to reserve one, maybe two, levels for debug, which leaves 7 to be split between interrupt and non-interrupt code, something like 4+3 or 5+2.
That 8 levels is sounding quite light. Is this a RAM cell that can be expanded, and what silicon area does it currently use?
There is also a case for 32-bit-wide push/pop, as that allows a better debugger design. You can save/restore registers without needing to use any other chip resource.
Well, that's only if interrupts are being used. We do have 16 concurrent and independent processors.
That's 8 x 16 levels and, what, 16 x 16 discrete events?
If they are not being used, or a mixed model is used, say for USB and display on a few COGs, that stuff is very nicely compartmentalized from other code.
People have a lot of safe and easy options long before the stacks see overload or risky conditions.
Debug offers one COG shadow register that can be used to fetch state cleanly too. The whole COG can be dumped to HUB. It's available during the debug ISR. A leave-no-tracks debug is possible.
Debug offers one COG shadow register that can be used to fetch state cleanly too. The whole COG can be dumped to HUB. It's available during the debug ISR. A leave-no-tracks debug is possible.
Where are the docs, and examples of this "leave-no-tracks debug"?
Can such a debug inspect the stack and show the stack pointer?
For interrupts, no stack is used. Instead, CALLDs are used on register sets:
INT1 uses $1F4 as the jump address and $1F5 as the return address.
INT2 uses $1F2 as the jump address and $1F3 as the return address.
INT3 uses $1F0 as the jump address and $1F1 as the return address.
The hidden debug interrupt uses INA as the jump address and INB as the return address. These registers are normally read-only, but become RAM during a debug interrupt.
Comments
Clearly that is harder for a beginner. Clearly it is. It does nothing we cannot do already. Not so much; they are already in the language. Ah, I see. Different opinions are "absurd" and "irrational".
Looks like I'm out voted here and it's Chip's call for his language anyway.
As I said, we have to agree to disagree and move on.
Did you mean ALTI?
Not requesting it be in SPIN, just a point of curiosity.
We have case statements for that, but they are inefficient and cumbersome compared to what is basically a math-driven HLL jump table.
To me, it seemed structured, with a specific purpose, not just a potential mess.
I would agree, with one change: more like how it is done in the BASICs that came after 1982, that is, in the form ON value statement <, statement>.
Where the statements could just as easily be function/procedure calls or anything else.
When indicated, doing that kind of goto cleanup is fast and robust.
Executables should reflect that, somehow. That's all.
Did you ever read the Story of Mel? http://www.pbm.com/~lindahl/mel.html
Eric, Mike: look[up/down][z] is a rather expensive set of bytecode though.
Mel doesn't have anything to do with this particular construct.
I see now the exception though. No worries. It's a curio.
The debug details are all in the Google Doc.