It's OK, Leon. As you said, Chip asked what is going on in the world, and XMOS, like all the other X's - Infineon XMC, the X-ARMs - has to be taken into account. In the end the account sums up to zero. If not exactly zero, the X-account / P2-account is close to zero ;-)
I'm rooting for the RISC-V concept but I have not followed its progress much this year.
Last I checked, lowRISC was hoping to get test chips out this year, and perhaps something available for us in 2017.
The minions are a bit of a mystery but an early document suggests they want:
...creation of software-defined I/O interfaces.
...inefficiencies of crude bit-banging implementations will be avoided by providing additional hardware support in the I/O shim to reduce the number of low-level operations and the need to constantly poll inputs. Specialised ISA extensions will also aid the minions, e.g. by providing low-cost precise timing support. The ideal division of support between ISA and shim is currently under investigation.
...dedicated low-latency interconnect, such as register-mapped FIFOs, between the main cores and the minions
All of which sounds great.
http://www.lowrisc.org/downloads/lowRISC-memo-2014-001.pdf
Since it appears we're having a cathartic moment, I'd like to throw a marketing suggestion into the mix . . .
1) Ditch the name "Propeller" for the Prop2. Propellers are "old" technology in aviation (who wants to buy a ticket on a propeller powered airplane?) and at first blush that is what the name connotes. Perhaps "Turbine" if you need to stick with the "rotational" theme.
2) Ditch the goofy "Jughead (from Archie comics) Hat" logo. Please . . .
Since the P2 chip isn't out yet, Parallax has some time to hire a professional marketing company and make a clean break when the new chip comes out. . .
Heater - I'll check that out. It will be interesting to see if this makes its way into a future Raspberry Pi. The Pi has run into that 1 GB VideoCore limit (which I don't understand), Broadcom has been taken over by Avago, which is doing a lot of cutting (does Hock care at all about the Pi?), and there have been those statements about "Raspberry Pi for adults."
Ha, ha! On the raspi forum, whenever there is some wild speculation about a future Pi, I suggest that the Pi 4 or whatever will be a RISC-V. After all, lowRISC is being developed across the road in Cambridge; these guys all went to Cambridge Uni and know each other.
It all makes sense to me. RS and Farnell have done well out of the 10 million Raspis sold; they may well put up the cash to get lowRISC chips made for the next Pi.
What is missing, of course, is an open source GPU for the RISC-V SoC. But that is in hand as well: http://miaowgpu.org/
I wish Parallax would stop calling their cores "COGs". :-)
Agreed, we will switch to the name "cores" provided Chip agrees.
Ken Gracey
No no no. That's not what I meant. I was just pointing out that we use non-standard terms as well. I was more suggesting that we cut both XMOS and Parallax some slack and let them name things the way they want and instead evaluate the products based on their technical merits rather than whether we agree about the names.
I do agree with Ken, and you do point out COG is a non-standard term.
Because CORE is a very widely understood, standard term (even RaspPi users are exposed to it), it makes good sense to release a new device using that term.
Saves thousands of accumulated hours in explaining...
Just received an email from Microchip. Thought this might fit in with this conversation. They now offer a cloud-based IDE for programming. No download required. I don't use Microchip but thought this might interest some here.
Well, cog is a non-standard term. I'm not fond of it myself, but what is done is done, IMHO. I now consider it a Propeller trademark.
Just received an email from Microchip. Thought this might fit in with this conversation. They now offer cloud based IDE for programming. No download required. I don't use Microchip but thought this might interest some here.
Interesting. Fine for the non-paranoid, or those who do not care about version control.
I see it cannot resist being adware...
You have compiled in FREE mode.
Using Omniscient Code Generation that is available in PRO mode,
you could have produced up to 60% smaller and 400% faster code.
but the left pane reports
Code : Free : 5312
Code : Pro  : 5199
Data : Free : 801
Data : Pro  : 799
but I do now see the weasel words 'up to', so I guess 2.12% smaller code and 0.25% less data is some small way towards 60%...
Code seems rather large for main.c and LED_Array.c, and I cannot find MAP or listing report files in the project?
That said, it was easy to get some changes done.
What is that IDE coded in?
... You have compiled in FREE mode.
Using Omniscient Code Generation that is available in PRO mode,
you could have produced up to 60% smaller and 400% faster code.
but the left pane reports
Code : Free : 5312
Code : Pro : 5199
Data: Free : 801
Data: Pro : 799
...
I must say that the MPLAB X version that you install on the PC claims the same thing. You can always "optimize" the code by using numerous tricks (I don't remember a single one at the moment) when you write your code. In the end, you will shave a few bytes.
I use MPLAB on a daily basis ... no thanks to the cloud version, I'm fine with what is installed on my PC. Not that I am paranoid, but I don't want to be "connected" to do what I need to get done. Some of our clients have very strict foreign policy rules about this sort of thing, especially if I am in the field and need to make a code adjustment/update on a particular machine.
I talked to Andre LaMothe today (Propeller Hydra) about the state of things. He says ARMs have taken over the world and they are quite cheap. In the last decade, they've certainly received a few thousand man-years of engineering effort. Some now come with quad graphics accelerators and every peripheral you could ask for - several of each, in some cases. ARM has also unified a lot of the core-related libraries and provided a framework for vendor-made peripheral library support. Lots of usable code is available for ARMs. Nothing is going to compete in that arena for the foreseeable future.
For the Prop2, I think it needs to be pitched as something quite different. It's good at I/O and multi-processing to support I/O. Its strengths cannot be exploited very well under an ARM-like approach to development, where compiled C code talks to hardware peripherals that deal with real-time phenomena - unless libraries of PASM code can bridge that gap.
From what Heater has said, along with Andre, we would do well to get a Clang (maybe) code generator working, so that we could get JavaScript and Python running quickly, along with C. It would be really interesting to see what, exactly, was done to make Clang support a particular MCU. I know many of you have been clanging about this for years.
I checked a few samples of Andre LaMothe's books - apparently the main microcontroller used (the Ubicom SX) is not on the market anymore, and sells at a high price in very small quantities. Since I only read the samples, I might have missed other things mentioned in the books.
Since ARM microcontrollers are everywhere, is there an effort by him, in these books, to use a commercially available ARM microcontroller (purchasable from element14, RS, or another nearby retailer) to drive VGA or composite video signals, or to process sound?
His older books were about making video games from SX and Propeller chips. His new work is all ARM-centric.
By the way, the SX chips were not that expensive (~$2).
Why did ARM win the battle? Not because ARM is good or bad: they managed to draw a critical mass of users onto their platform. And now developers create applications in the form of PCBs, systems-on-chip, etc. ARM is a building block.

Thinking this way, I see the COG as a unified building block. When I do motor control, I am really creating a system-on-chip: one cog generates PWM and syncs the ADC counters, another acts as the ADC by reading and evaluating counters, then there is a UART to a PC, and an HMI by running a screen and interfacing a mouse and keyboard. Yes, it is a system-on-chip. I can even do this trick: the ADC feedback pulses compensate the analog input, so if I RC-filter those pulses I get an analog voltage that is the inverse of the input. I can feed this voltage into another ADC and thus subtract one voltage from another, without analog chips. You can even build a multiplier this way. With the P2 we have the same, only multiplied! So the Prop creates universality.

Parallax: equip your genius. Propeller: empower your genius.
Using Clang and LLVM is probably a good idea. Unfortunately, it's pretty much starting over from what we did with PropGCC. I guess the library work could be carried forward if we use a similar ABI, but all of the GCC compiler work will have to be scrapped. This is a good time to do it, though, since we haven't started the P2 GCC work yet. We should probably make a decision on this fairly soon. Do you have a resource in mind to do the Clang/LLVM work, or do you want the old PropGCC team to look into it?
Sorry I lost my example but a few years back I made a simple comparison of the same functionality written in C++ with classes and objects and in C with just structs and functions. I was amazed to find that they both compiled to exactly the same code, byte for byte!
You do have to avoid the C++ standard library. Don't use exceptions. You probably don't want to use much in the way of inheritance, virtual methods, etc.
Basically, C++ obviously bloats out your code if you ask it to. Note that the tiny Arduino is programmed in C++.
I agree, and IMHO, I don't think there is much need for an OO approach when a procedural one works as well. C++ tends to fatten the code because it associates specific functions (as modifiers) with specific object types (classes). Semantically, it is a great approach, but to the machine, in the end, it is the same. I guess C or C++ here is a matter of preference. I tend to stick to C because of its procedural approach, and I don't need to create classes. If I were to deal with complex types (colours, points in space, etc.), I would probably consider C++, for encapsulation.
I don't think there is much need for an OO approach when a procedural one works as well
I'm not sure I know the difference. C has functions and data structures. C++ has functions and data structures. Except in C++ you can say which functions operate on which data structures by placing them in a class. It's all procedural to me.
C++ tends to fatten the code because it associates specific functions (as modifiers) to specific object types (classes).
That in itself does not fatten code. That was the point of my comment above. Let's consider an example:
You have a million data items, each one of which is described by some struct in C. You have a bunch of functions that operate on those data items. You are quite likely to include a pointer to that struct type as a parameter to each of those functions so that it knows which one to operate on. You end up with things like read(someInstance,,,), write(someInstance,,,), init(someInstance,,,) etc.
In C++ your struct becomes a class and your functions become members of the class. You end up with someInstance.read(,,,), someInstance.write(,,,), someInstance.init(,,,)
Turns out that when you compile that, the executable code is exactly the same!
Of course if you only have one instance, there is little point in having a class and so one could do away with those instance pointers and save a bit of code.
I will agree that perhaps people go overboard on the classes thing when it is not really necessary.
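Here is a minimal sketch of the kind of comparison I mean (the Counter names and functions below are made up purely for illustration; compile with something like g++ -Os -S and diff the assembly):

    #include <cstdint>

    // C style: plain struct plus free functions taking an explicit instance pointer.
    struct CounterC { uint32_t count; };
    static void     init_c(CounterC *c)       { c->count = 0; }
    static void     bump_c(CounterC *c)       { c->count += 1; }
    static uint32_t read_c(const CounterC *c) { return c->count; }

    // C++ style: the same data, with the functions moved inside the class.
    class CounterCpp {
    public:
        void     init()       { count = 0; }
        void     bump()       { count += 1; }
        uint32_t read() const { return count; }
    private:
        uint32_t count;
    };

    uint32_t use_c(CounterC &c)     { init_c(&c); bump_c(&c); return read_c(&c); }
    uint32_t use_cpp(CounterCpp &c) { c.init();   c.bump();   return c.read();   }

The member functions still receive the instance pointer, just implicitly as "this", so with optimisation on, use_c and use_cpp typically compile to identical machine code.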
PropGCC should be great with P2. I'd really like a Visual Studio plugin for PropGCC, but I'll take what I can get...
I think PropGCC on P1 is OK, but all the different memory models and library options make it a bit complicated, in my opinion.
I think there can be a primary mode on P2 that is just hubexec with the full C library.
Don't see why that couldn't be used most of the time.
But, I'm really hoping that C++ can be used in a reasonable way.
C++ can already be used in a reasonable way with PropGCC for P1. Check out PropWare. I've used it as well. As Heater points out, you need to forget the huge C++ standard library. That might not even be practical on P2.
At this point, I'd go so far as to suggest leaving SPIN a P1-only language (for now). Focus on getting another well-known interpreted language ported. With this, I'd also simplify the structure for P2 Assembler files:
* Get rid of CON and DAT.
* Add a "con" directive to define constants.
* For those few constants that are actually controlling boot configuration, make them directives as well.
* Use file extension "pasm" (assuming the P2 official name starts with "P").
I'll answer to your comment, point by point.
I don't think there is much need for an OO approach when a procedural one works as well
I'm not sure I know the difference. C has functions and data structures. C++ has functions and data structures. Except in C++ you can say which functions operate on which data structures by placing them in a class. It's all procedural to me.
C is commonly considered a subset of C++. However, it doesn't have classes, nor is it object-oriented, so it is procedural (functions and procedures are separate from the variables, not modifiers of them). That said, it is possible to write procedural code in C++ (most main files in C++ implement both procedures/functions and classes), as it is a superset of C.
In C++ your struct becomes a class, your functions become members of the class. You end up with someInstance.read(,,,), someInstance.write(,,,), someInstance.init(,,,)
We essentially agree, yes. But the case you mention is not always so. In C++ the functions only become members of a class if you include them explicitly. Also, a class is not a structure: a structure is a composite variable, while a class has structures and functions encapsulated together.
When I said "fatten the code" I was referring to C++ code compared to equivalent C code, and only if you don't need classes. Past a point, you will be better off using them, but then your code is object-oriented and not strictly procedural. Then, as a good rule, you should create separate .cpp and .hpp files for each class, if you are using classes. Your human-readable code will be more "fat", but it can render the same machine code (if the compiler is good).
I have created some C++ classes in the past, and I know that 70% of the code was overhead (between declarations, constructors and unused functions/procedures).
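A tiny illustration of the per-class file split I mean (Point is just a made-up example):

    // Point.hpp - declarations only (hypothetical example)
    #ifndef POINT_HPP
    #define POINT_HPP

    class Point {
    public:
        Point(float x, float y);
        float x() const;
        float y() const;
    private:
        float x_, y_;
    };

    #endif

    // Point.cpp - definitions
    #include "Point.hpp"

    Point::Point(float x, float y) : x_(x), y_(y) {}
    float Point::x() const { return x_; }
    float Point::y() const { return y_; }

Most of the extra lines are declarations and boilerplate, not extra machine code; whether the trivial accessors cost anything at run time depends on whether the compiler can inline them.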
At this point, I'd go so far as to suggest leaving SPIN a P1-only language (for now). Focus on getting another well-known interpreted language ported. With this, I'd also simplify the structure for P2 Assembler files:
* Get rid of CON and DAT.
* add a "con" directive to define constants.
* for those few constants that are actually controlling boot configuration, make them directives as well.
* Use file extension "pasm" (assuming the P2 official name starts with "P").
I may have to agree. SPIN is a good language, except for the bytecode part. Or at least it should be possible to compile it to PASM instead, rather than the Java style of compiling to bytecode for a virtual machine.
Yes, yes, it's just that to my mind putting procedures into classes which contain the data they operate on, or having free-standing procedures that take pointers to structs as parameters, is all "procedural". Come to that, they are both "object oriented".
I can't help thinking that if 70% of the code was overhead it was not done correctly. Wrapping "class Xyz {...}" around some procedures to make them a class is not much.
You should create a .h file for every .c file anyway.
I have a ton of complaints about C++, but that is not one of them.
Chip was asking about other devices, so I mentioned XMOS. What is wrong with that?
XMOS, 204 search results
Raspberry Pi, 1870 search results
Arduino, 2630 search results
https://mplabxpress.microchip.com/mplabcloud/ide
Nice! Now I can plug my PICkit 3 into the cloud! (Light sarcasm here.)
I wonder if some day manufacturers will stop making HDDs for personal use.
Regarding... I must say that the MPLAB X version that you install on the PC claims the same thing. You can always "optimize" the code by using numerous tricks (I don't remember a single one at the moment) when you write your code. In the end, you will shave a few bytes.
Compiled Java, probably. It is the same as with the local version.
Damn, every time I search the forum for something I get zero results!