I have noticed that p2gcc does not have basic functions like high(), low(), setdirection(), toggle(), and pause() that are used on the P1.
I moved them over from simpletools and they work fine, but I'm stuck.
Should they be written in P2 code, which would mean writing them in assembly instead of C?
Mike
Sample:
void high(int pin)
{
    unsigned int mask;
    if (pin > 31)
    {
        mask = 1u << (pin - 32);    // pins 32..63 are controlled by the B registers
        OUTB |= mask;
        DIRB |= mask;
    }
    else
    {
        mask = 1u << pin;           // pins 0..31 are controlled by the A registers
        OUTA |= mask;
        DIRA |= mask;
    }
}

void low(int pin)
{
    unsigned int mask;
    if (pin > 31)
    {
        mask = 1u << (pin - 32);
        OUTB &= ~mask;
        DIRB |= mask;
    }
    else
    {
        mask = 1u << pin;
        OUTA &= ~mask;
        DIRA |= mask;
    }
}

void sleep(int sec)
{
    unsigned int t;
    t = CNT;
    while (sec--)
        waitcnt(t += CLKFREQ);          // wait one full second per iteration
}

void msleep(int millis)
{
    unsigned int t;
    t = CNT;
    while (millis--)
        waitcnt(t += CLKFREQ / 1000);   // wait one millisecond per iteration
}
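For reference, here's a quick usage sketch of the helpers above. The pin number is just an example (P56 is an LED on some P2 boards); use whatever pin you have wired:

/* Blink an LED once per second using the ported helpers above. */
int main(void)
{
    for (;;) {
        high(56);      /* drive the pin high and make it an output */
        msleep(500);
        low(56);       /* drive the pin low */
        msleep(500);
    }
    return 0;
}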
I've been focusing on the most frequently used standard C functions. Maybe it's time to port the whole simple library over. In general, the code could be left unchanged in C. However, because the P2 has 64 I/O pins, changes have to be made, as you did with high and low. It might make sense to implement these functions in assembly, since high and low can be performed with DRVH and DRVL.
Didn't someone already offer to port over the Simple Libraries? Was it Roy?
Eric: Do you intend to keep using the _IMPL method of associating function prototypes with the files containing their implementation? If so, I will have to add those clauses to all of the header files I move over from proplib. You've already done that for the ones you've moved over.
At least for now I plan to keep using that. For PropGCC _IMPL will be defined as empty (I've tried to keep compiler.h, at least, working with PropGCC) and it does serve as documentation on where to find the source code.
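For anyone new to the convention, the annotation looks roughly like the sketch below. The macro shape and file path here are illustrative assumptions rather than the exact contents of the real headers:

/* Sketch of the _IMPL convention: each prototype is tagged with the file
 * that implements it.  Toolchains that don't use the information, such as
 * PropGCC, define the macro to expand to nothing. */
#ifndef _IMPL
#define _IMPL(file)     /* empty under PropGCC */
#endif

int putchar(int c) _IMPL("libc/stdio/putchar.c");   /* hypothetical example */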
Thanks @"Dave Hein" for fixing my program problem.
I had this library to drive the OLED display and it just didn't work. I would get one piece to work but as soon as I added some more code nothing worked. Went round and round and still couldn't figure it out.
I just tried again and I got this message:
Object 3 is an odd size - 5174
Padding address to 4-byte boundary
Now everything works as it should. Now I can start hacking at your code again.
Glad that this fixes your problem. I've been working around this problem for quite a while by manually padding out my char arrays in my programs to make them a multiple of 4 bytes.
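For anyone still on an older build who hits the same message, the manual workaround boils down to rounding the array size up yourself, something like this (the 5174-byte size is just the figure from the message above):

/* Round a char array up to a multiple of 4 bytes so the object that
 * follows it in the image stays long-aligned. */
static char oled_data[((5174 + 3) / 4) * 4];   /* rounds 5174 up to 5176 */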
Just want people to know that I am no longer actively working on p2gcc. As the title of this thread indicates, I developed p2gcc so that I, and others, could write C code for the P2 until GCC became available for it. I was hoping that Parallax would become engaged in C development tools and that they would coordinate the activity. However, this never happened.
There are now other tools for C development on the P2, such as fastspin and Catalina. I would encourage anyone that is interested in writing C code for the P2 to explore using one of those tools.
While I can understand abandoning p2gcc since fastspin/C and Catalina are well underway, it is currently the only way to compile C++ code for the P2. I guess you could use the RISC-V compiler and Eric's emulator but p2gcc is/was the only native way to do it.
Can p2gcc compile C++? I don't think it can at the moment, although changing that should be relatively simple.
GCC for RISC-V is definitely an option, and even supports a lot of the native P2 hardware via some P2 specific RISC-V extensions.
It is a pity that Parallax hasn't done more for (non-Spin) tools development. Perhaps now that final P2 hardware is imminent that will change.
I thought all p2gcc did was take the assembly output of PropGCC and convert it to P2 assembly. Shouldn't that work for C++ as well as C? Of course, there will be library work to do. Maybe that isn't done yet. How much work is required to just get basic C++ working without the standard C++ library?
The p2gcc script file would just need a minor tweak to handle *.cpp files. The bigger issue is providing the C++ library, and implementing the startup code needed for C++.
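As a rough illustration of the startup side: C++ needs its global constructors run before main(). Under the usual GCC/ELF conventions that looks something like the sketch below; the section symbols are the conventional ones and would have to be provided by the p2gcc link step, so treat this as an assumption rather than working p2gcc code:

/* Walk the init_array section and call each C++ static constructor
 * before entering main(). */
typedef void (*ctor_fn)(void);
extern ctor_fn __init_array_start[];
extern ctor_fn __init_array_end[];

static void run_static_ctors(void)
{
    for (ctor_fn *f = __init_array_start; f != __init_array_end; f++)
        (*f)();
}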
I was surprised that with some effort p2gcc was able to take all the MicroPython files (assembled for P1 in COG mode) and convert them over to P2 PASM, then link them. If there were alloca() and setjmp/longjmp() implementations present in the P2 libraries, instead of the dummy placeholders I added to prevent link errors, I'd potentially be able to try it out.
Update: P2 image size was 176kB for the minimal port.
Thanks Dave, I mapped alloca to be __builtin_alloca and the compiler and linker were happy about it. It still can't find the setjmp/longjmp implementation anywhere, though; I think I have an older version of p2gcc that simply didn't include it. I'll go check the latest version to try further.
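In case it helps anyone else, the alloca mapping mentioned above boils down to something like this:

/* Map alloca() onto GCC's builtin so no library routine is needed;
 * GCC then expands the allocation inline on the caller's stack. */
#ifndef alloca
#define alloca(size) __builtin_alloca(size)
#endif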
The latest zipfile, p2gcc006.zip, was posted at the beginning of February. I added setjmp and longjmp about 2 weeks later. You would have to download the latest source from GitHub to get this. I could make another zipfile with the latest code, but I am away from my Windows computer right now. I could post a new zipfile next week if you need it.
Yeah, I found your updated GitHub version earlier today and am using it now. Thanks again, Dave. I can compile for P1 in COG mode, and now it assembles/translates/assembles/links okay, including the missing alloca and setjmp/longjmp stuff I had problems with before.
However, I have found that it doesn't really work in LMM mode (was it meant to, or is that just experimental?) and I get a range of output and errors on some files during the s2pasm and p2asm steps. It really doesn't like the ".compress default" directives that are generated by PropGCC when compiling in LMM mode. It also complains that immediate values must be between 0 and 511 and that some opcode types are not supported (probably related to the .compress default thing as well; it seems to be the same line). I am using PropGCC 4.6.1, and some example compile options are these:
propeller-elf-gcc -I. -I../.. -Ibuild -Wall -std=c99 -mlmm -S -Os -DNDEBUG -c -MD -o build/py/objfilter.s ../../py/objfilter.c
s2pasm -lmm -p/Users/roger/Applications/p2gcc/lib/prefix.spin2 somefile.s
p2asm -c -o somefile.spin2
Here are some snippets of the sorts of errors I see in various files (but not all). Perhaps I'm missing something important about LMM mode...? I must be missing the actual runtime LMM VM code too, as later when I tried to link I noticed p2link also complains:
__LMM_FCACHE_LOAD is unresolved
So where exactly is this LMM runtime located in the p2gcc lib? I only see it in the P1 object code lib area, and already compiled to object code with no P1 source.
From s2pasm:
PUSH or POP 3 13
PUSH or POP 3 15
Found mov sp,lr
Found mov sp,lr
PUSH or POP 6 10
PUSH or POP 6 15
ERROR: Not push or pop
call #__LMM_POPM
....
from p2asm:
188: ERROR: default is undefined
.compress default
label .compress is already defined
219: ERROR: default is undefined
.compress default
label .compress is already defined
464: ERROR: default is undefined
.compress default
label .compress is already defined
612: ERROR: default is undefined
.compress default
188: ERROR: Opcode type 20 is not supported
.compress default
219: ERROR: Opcode type 20 is not supported
.compress default
464: ERROR: Opcode type 20 is not supported
.compress default
282: ERROR: Immediate value must be between 0 and 511
mov lr,__LMM_RET
288: ERROR: Immediate value must be between 0 and 511
mov r0,__LMM_FCACHE_START+(.L26-.L24)
294: ERROR: Immediate value must be between 0 and 511
mov lr,__LMM_FCACHE_START+(.L27-.L24)
...
Update: P2 image size was 176kB for the minimal port.
Does that run? (Or is it close to running?)
If I followed, that is P1 GCC-generated native code (for a large virtual cog?), translated to P2 PASM, so you end up with native P2 code, but an easy subset of P2 that maps to P1.
How does that P1 emulation compare with RISC-V emulation?
That 176kB looks like quite a good footprint? E.g. I find this for a Cortex M4 example:
MicroPython provides a REPL (Read Evaluate Print Loop) mode that allows users to quickly test and run code through a terminal application.
The following table lists the NuMicro microcontrollers supported by NuMicroPy.
MCU  | Board            | Firmware ROM size | Firmware RAM size
M487 | NuMaker-PFM-M487 | 362 KB            | 77 KB
M487 | NuMaker-IOT-M487 | 373 KB            | 77 KB
I didn't test the LMM mode extensively, and I hadn't encountered the .compress directive before. p2gcc doesn't understand this directive, so it might be treated like a label or option depending on how it's used.
LMM refers to the mode the P1 GCC compiler is set to. Normally p2gcc uses the P1 COG mode and then converts it to P2 assembly. I changed p2gcc to optionally use P1 LMM instead of COG mode to see if that produced more efficient P2 code. It turns out that the results were about the same as COG mode.
Sorry, but p2gcc is one kludge on top of another kludge. The right way to do it is to modify the P1 GCC compiler to produce native P2 code. Maybe someday Parallax will see the need for a full blown C++ compiler for the P2, and fund a project to develop it. However, I lost hope that this will ever happen.
The right way to do it is to modify the P1 GCC compiler to produce native P2 code. Maybe someday Parallax will see the need for a full blown C++ compiler for the P2, and fund a project to develop it. However, I lost hope that this will ever happen.
At this point it might not make sense to modify the existing PropGCC to generate P2 code since it is based on a very old version of GCC. Maybe that's the problem. Starting over would be a big project and hence not something anyone wants to do without some support from Parallax.
@Cluso99 I believe Catalina supports ANSI level C. More recent C software making use of C99 capabilities (or later standards) may not necessarily work with that compiler without patching accordingly where and if possible.
While this may not be as much of a problem if you are just writing something from scratch and can live within its constraints, it can fairly quickly become one when porting existing software whose requirements exceed what ANSI C delivers.
GCC is currently at version 9.x; Parallax uses 4.x(?), and the latest PropGCC builds with 6.x. The only way to make GCC useful for the P2 would be if someone could get the GCC team to accept the new backend back into the mainline code.
Every other attempt is futile and will result in outdated versions, like we have now.
As Roy mentioned, Parallax is quite stretched right now; developing the P2 was to feed Chip and family, and producing it will take a huge chunk of money Ken has to come up with. As far as I understand Parallax as a company, they want to stay private, in my opinion the right decision. Going public would bring in lots of money but would - without doubt - destroy the 'vision' those brothers have; right now they can give a whatever to non-existent shareholders and do what they think is the right thing to do.
I doubt we would have a P2 if Ken and Chip had to do quarterly progress reports to shareholders, defending the core principles Parallax has toward its customer base, its goals and its employees.
Maybe I am biased, but I have visited a lot of workplaces in my life and - sure - the place was prepared for a Parallax event. But walking around there, watching interactions, just lurking, I saw a lot of things which - hmm - made me think.
I did a lot of grunt work in my life - driving trucks and forklifts, digging trenches, cleaning silos, unloading ships - and my general experience is that the worst jobs get paid the least amount of money and get the least attention from the company, especially once it has gone public. The bean counters don't care.
My personal experience visiting there showed me that Parallax does care about this, and I was overall quite impressed. Attention to detail everywhere. Good lighting everywhere, clean. Arranged by workflow. Not just the offices, but also the warehouse. Just well done.
But all of this costs money, and my guess is that Ken will hold back as much as he can in case of a second failure of the P2 silicon. If that one is OK he might have some reserve funds to spend on GCC.
And he should, if he can get it back into mainline GCC.
Someone has to write the @potatohead-style long posts; he is busy.
Am I missing something that Catalina for P2 cannot do?
Same thing happened on P1. Ross did all the work with Catalina and then it was redone with PropGCC.
I don't think Catalina has any C++ support. That is probably the biggest reason.
Any support for CMake (or most any other modern build system, such as Gradle) would need to be added manually... not for the faint of heart (adding PropGCC support was difficult enough, but it would have been harder if GCC wasn't supported already).
Rust won't happen with Catalina (though I'm not sure it will happen with GCC either... I think Rust might be dependent on LLVM).
The current focus around these parts is for compiling MicroPython.
Micropython already compiles for P2.
That's true, but I'm curious whether this can be done natively rather than via a competing architecture.
It basically is being done natively, it's just split up into two pieces -- at build time the C code is converted from C to RISC-V opcodes, then at run time those RISC-V opcodes are compiled to P2 instructions. So any kind of loop that fits in cache runs at native speed, e.g. a pin toggle loop like:
    for (;;) {
        _pinnot(0);
    }
will end up compiling to P2 code like:
        mov     x8, #0
loop:
        drvnot  x8
        jmp     #\loop
so it'll run almost as fast as possible. (A "rep" instruction would be faster in some cases.)
The only drawback is the code cache, which takes up RAM and which needs to be big enough to hold the main interpreter loop. In the latest MicroPython build that's become pretty much moot because we can now use RISC-V compressed instructions, which makes the program smaller than a "completely" P2 native version could be even when the cache is taken into account.