I didn't mean to pick on you specifically - I find that code incredibly dense and hard to parse, and I also have difficulty with mathematical notation, so perhaps those are related.
You probably wouldn't like regular expressions, either.
Regarding support for GCC, do you mean financial support? or encouragement? See comments by Ken here
Tubular, thanks for the link to the "General Planning" thread. I read that thread when it was first started, but I had quit reading posts to it. That's what I get for not reading every single post on the forum.
I'm a bit disappointed by Ken's post. I was hoping Parallax would pursue GCC on the P2 sooner rather than later. However, I can understand that Parallax has sunk millions of dollars into P2 development, and they are probably a bit gun-shy about putting more money into it until it starts paying for itself.
I agree with JasonDorie, that line of spin code is incredibly dense and hard to parse, and I have no issues with mathematical notation.
One letter variable names are worthless, packed together without any spacing, doing assignment inline, and 15+ operations all in one line. This is the epitome of unreadable, unmaintainable, undocumented, and horrible code.
If this was in my codebase at work I would make the author redo it, and question why we hired such a person.
You probably wouldn't like regular expressions, either.
I use them, but you're right, I don't *like* them. They're great at what they do, but are very difficult to extract high-level meaning from. That's generally the litmus test for my code - "Can I read this in a year and know what the heck I meant?"
I didn't mean to pick on you specifically - I find that code incredibly dense and hard to parse, and I also have difficulty with mathematical notation, so perhaps those are related.
That is alright. Some people have trouble with expressions, that is how it is.
Though back to Spin2: what do we have in store for Spin3 on the Propeller 3? That should only be about 12 years off, so it's time to start thinking about it.
One letter variable names are worthless, packed together without any spacing, doing assignment inline, and 15+ operations all in one line. This is the epitome of unreadable, unmaintainable, undocumented, and horrible code.
That said, in Spin, because of the lack of an optimizer, sometimes this is the way you get it to go fast, which sucks (well, except for the one-letter variable names). This is another reason I rail about the need for a proper compiler/optimizer: it eliminates the need to write things that look terrible just to perform well.
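To illustrate (this is a made-up C example, not the Spin line being discussed): both functions below compute the same thing, one as a single dense expression and one with named intermediate steps. With an optimizing compiler the two generate the same code, so the readable form costs nothing.

#include <stdint.h>
#include <stdio.h>

/* dense form: one expression, everything crammed together */
static int32_t convert_dense(int32_t x, int32_t g, int32_t o, int32_t s)
{
    return ((x * g + (1 << (s - 1))) >> s) + o;
}

/* same computation, spread across named intermediates */
static int32_t convert_readable(int32_t raw, int32_t gain, int32_t offset, int32_t shift)
{
    int32_t scaled   = raw * gain;           /* apply the gain             */
    int32_t rounding = 1 << (shift - 1);     /* round rather than truncate */
    int32_t result   = (scaled + rounding) >> shift;
    return result + offset;                  /* re-centre the value        */
}

int main(void)
{
    /* both forms agree on the result */
    printf("%d %d\n", convert_dense(1000, 3, 50, 4), convert_readable(1000, 3, 50, 4));
    return 0;
}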
From my view, any work on Spin at this time is delaying the P2 from going into synthesis. Maybe there is other work going on that I'm not aware of, and the design is preparing to go into synthesis in parallel with other activities.
The 'other work that is going on' is proving the latest FPGA build image.
The final decision to pass the P2 Verilog into the physical layout steps is going to need careful focus on how much test coverage has been done.
The latest 'release candidate' FPGA image is as yet only a few days old; Feb 25th, I think.
Surely you do not expect Chip to take a Verilog compile as 100% test coverage, and ship?
No, I would expect that Chip would be running extensive tests on the FPGA image while everyone else is re-running, for the umpteenth time, the stuff that they have run on previous FPGA images. And then the design would go into synthesis.
I agree with JasonDorie, that line of spin code is incredibly dense and hard to parse, and I have no issues with mathematical notation.
One letter variable names are worthless, packed together without any spacing, doing assignment inline, and 15+ operations all in one line. This is the epitome of unreadable, unmaintainable, undocumented, and horrible code.
If this was in my codebase at work I would make the author redo it, and question why we hired such a person.
Well, I guess that is part of why I like assembly. Though you will note that the code in question does not use one-character operators except where they are obvious from the surrounding code. It is also notable that the line of code is much more understandable in the context of the program to which it belongs, which has since been changed from that original version to use four smaller expressions, even though that adds seven operations to the execution. And it has been rewritten again in PASM (check the Prop 1 forum for the PASM version posted today).
Either. JMG mentioned only one index register scenario, and from the perspective of hub code, it's not as extreme as it looks.
Ahh, right. Yes, I see the following possible ways to implement stacks:
* Hardware stack
- limited call/return only, no data (practically)
- very efficient
* Hub stack (using PTRx-variant instructions)
- can use for both call/return and data
- largest possible stack sizes
- inefficient
* Hub stack (manually maintain stack pointer)
- can use for data (possibly call/return as well, but awkward and inefficient)
- largest possible stack sizes
- can maintain more than two stacks without swapping out register values
- for data-only stacks, can be byte, word, or long-oriented
- inefficient, though not necessarily worse than the PTRx variant
* COG stack (manually maintain stack pointer)
- can use for data (possibly call/return as well, but awkward and inefficient)
- larger stacks than the hardware stack, but much smaller than the hub stack
- can maintain more than two stacks without swapping out register values
- generally long-oriented, though word and byte stacks can be done with some effort
- more efficient than hub stack (either variant) for long-oriented stacks, likely comparably efficient for byte or word stacks
* LUT stack (manually maintain stack pointer)
- can use for data (possibly call/return as well, but awkward and inefficient)
- larger stacks than the hardware stack, but much smaller than the hub stack
- can maintain more than two stacks without swapping out register values
- generally long-oriented, though word and byte stacks can be done with some effort
- more efficient than hub stack (either variant) for long-oriented stacks, less efficient for byte or word stacks
Does that cover it?
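For the two "manually maintain stack pointer" flavours, the bookkeeping a compiler (or hand-written code) has to do looks roughly like this generic C sketch; it is only an illustration, not P2 instructions, and the names are made up. The same pattern works whether the backing store is hub, cog, or LUT RAM, which is also why any number of independent stacks can coexist.

#include <stdint.h>
#include <stdio.h>

#define STACK_LONGS 64

typedef struct {
    uint32_t data[STACK_LONGS];   /* backing store: hub, cog, or LUT RAM */
    uint32_t sp;                  /* index of the next free slot         */
} soft_stack_t;

static void ss_push(soft_stack_t *s, uint32_t value)
{
    s->data[s->sp++] = value;     /* write, then advance the pointer */
}

static uint32_t ss_pop(soft_stack_t *s)
{
    return s->data[--s->sp];      /* retreat the pointer, then read  */
}

int main(void)
{
    soft_stack_t data_stack = { .sp = 0 };
    soft_stack_t call_stack = { .sp = 0 };    /* a second stack, no register swapping needed */

    ss_push(&data_stack, 123);
    ss_push(&call_stack, 0x1F0);              /* e.g. a saved return address */
    printf("%u %u\n", (unsigned)ss_pop(&data_stack), (unsigned)ss_pop(&call_stack));
    return 0;
}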
Regarding support for GCC, do you mean financial support? or encouragement? See comments by Ken here
I'm a bit disappointed by Ken's post. I was hoping Parallax would pursue GCC on the P2 sooner rather than later. However, I can understand that Parallax has sunk millions of dollars into P2 development, and they are probably a bit gun-shy about putting more money into it until it starts paying for itself.
I think the key comment is "false r&d start". I.e., from a direction and inertia point of view, these things can be crippling, quite aside from the financial aspects.
So the question becomes what can/should the forumista community do on C/C++ in the meantime?
I think the "false r&d start" likely refers to the work that was done for P2-hot. That work was all done on a volunteer basis as far as I know. Mine was anyway. I've verified with Ken that Parallax is interested in GCC for P2. They just aren't in a position to fund it yet. Since Chip has frozen the Verilog, it might be time to start some new volunteer efforts. I've dived into GAS a bit and will likely do some work on it shortly as long as the instruction set stays frozen. I don't even really care if the Verilog is frozen. It's mostly the instruction set that matters. Anyway, we need GAS before GCC so this is a first step.
I know I am a bit behind on this, though I thought the idea was to use the LUT for a local stack; at least that is what I remember from either shortly before or shortly after the Hot Prop 2 run.
Sounds really good, David. I'm going to work on a simple test/visualisation setup; I should have something shortly.
Chip talked about using LUT for stack space, but backed off of that recently. I think it's possible the current frame will be in LUT, but the full stack will be in hub for Spin.
I don't think it prevents anyone from starting on C++, or any other language.
(I know TF2 is running on the P2 image, if not with the very latest opcode tuning.)
Looking at the Google sheet for instructions, I do not see any comments on the HW stacks available, or their sizes?
Ah, sorry. There is an 8-level 22-bit hardware stack in each cog. It's used by the CALL/RET/PUSH/POP instructions.
OK, so a compiler writer is going to have to choose, early on, what stack approach to use.
8 levels (especially with interrupts in the mix) is not easy to manage in an HLL.
What happens on stack overflow/underflow?
The Propeller cogs do not have a need for the large stacks that conventional architectures need. Since the cogs have general-purpose registers there is no need to save a lot of registers, only the PC and status bits. No need to save and restore blocks of registers, since interrupt code would have its own registers to use.
And when in HUBEXEC mode, we basically have a 500-register CPU.
Indexing, pointers, and software stacks can all be maintained without a save/load or push/pull to preserve state. Indexing happens via the AUG instructions, which provide auto increment, decrement, and other common index-register-type features.
While there is an additional 2-cycle cost for these, note that most CPUs that offer similar advanced features do so with an additional cycle count. On the P2, these are broken out and can be generally applied as and where needed. The trade appears to be a somewhat larger program, but consistent and predictable execution times.
Seems to me compilers could just set up all they need, stack, pointers, accumulators, and allocate large numbers of registers to make best use of the resources.
A couple of small subroutines in COGEXEC mode may make some operations even faster. Those would appear like fast, complex instructions to a calling HUBEXEC program, for example.
The Propeller Tool won't compile it like that. It is possible to do essentially the same thing in the Propeller Tool, but a couple of helper functions/objects are needed.
Maybe a little late, though, on ternary operators:
I think that there are enough things where ternary operators are clearer than using an if/else construct or using a case construct to make it worthwhile to include them.
An example in C that is a bit redundant in Spin, though it should help make the point.
With a ternary operator:
/* Get the minimum of the two values int a and int b, returning value a if equal */
least = ( a > b) ? b : a;
And with an if construct:
/* Get the minimum of the two values int a and int b, returning value a if equal */
if (a > b)
    least = b;
else
    least = a;
So please do include ternary operators, please.
A second example:
Using a ternary operator:
/* Make sure an ASCII value is alpha, returning null if not */
Temp = TestChr & 0x0DF;  // Test it as upper case, to make only a single test range.
TestChr = (Temp >= 65 && Temp <= 90) ? TestChr : 0;
versus using an if construct:
/* Make sure an ASCII value is alpha, returning null if not */
Temp = TestChr & 0x0DF;  // Test it as upper case, to make only a single test range.
if (Temp >= 65 && Temp <= 90)
    TestChr = TestChr;
else
    TestChr = 0;
Have I made my case yet?
I just noticed some documentation for ALTI in Chip's docs.
Just read it but still don't understand it...
Looks pretty useful, if you can figure it out...
Heater,
You are being stubborn and absurd, in my opinion.
You have this irrational hate for the ternary operator. Everyone else seems to love it or be fine with it. Chip loves it. It's very simple and clean.
You say it's harder to read, but others including myself think it's easier.
You say it's redundant, but there are situations where writing the equivalent if construct would be cumbersome. Also, do you feel the same way about while(), do while(), and for()? If so, then you are a lost cause.
I am fairly certain New Spin/Spin2 will have the ternary operator, and I plan to make it available for P1 via OpenSpin (as I also plan to make as many of the new Spin/Spin2 changes as I can available for P1).
I use ternary operators, but usually only if the condition and values are simple/short. There is one case where I actually prefer the ternary operator: when it replaces a series of 'elseif' statements.
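The chained form being referred to isn't shown above, but in C syntax it would look something like this (cond1..cond3 and value1..value4 are just placeholders):

/* hypothetical placeholders, only to show the shape of the chain */
int pick(int cond1, int cond2, int cond3,
         int value1, int value2, int value3, int value4)
{
    return cond1 ? value1 :
           cond2 ? value2 :
           cond3 ? value3 :
                   value4;
}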
I personally find that more readable and concise than
if cond1
result = value1
elseif cond2
result = value2
elseif cond3
result = value3
else
result = value4
Heater, if your main argument is that ternary operators make code harder to read, I think this is one place where the opposite can be stated. Besides, people can write hard-to-read code in Spin regardless of whether there's a ternary operator. Keeping it out won't change that.
Speaking of ... for Spin2?
-Phil
Really? I missed that part! Is it documented?
He means the ALTx instructions, particularly ALTI.
Mobile gets me all the time.
Thanks Chip.
Yes, it should have been TEST:=objAddress.
Your case for the ternary operator is to add complexity to the syntax and semantics of Spin.
I'm just not into that idea.
With ternary operators: Do I really have to include the if constructs to make the point?