New Language Parser for Spin?
DavidZemon
Posts: 2,973
This looks intriguing... maybe a new Spin IDE based on the IntelliJ platform could pop up some day.
http://timjones.tw/blog/archive/2014/11/24/getting-started-with-jetbrains-nitra
Comments
Are you starting a new project to make such an IDE?
I cannot imagine who would want to use such a Java based monstrosity.
And at the moment, no I'm not going to take it on. I have way too many TODOs in PropWare. But I could see it some day down the road. I'd want to see some real interest first though. But I think it'd be great to show the maker world that you can have a powerful IDE without dramatically increasing complexity. I think PropellerIDE is miles ahead of other IDEs for the Propeller, but still lightyears behind Visual Studio or IntelliJ-based IDEs.
Nope. But on your prompting I went to jetbrains.com and downloaded 200-odd megabytes of idea-IC-139.225.3.
Having unpacked it I run idea.sh and get:
"ERROR: Cannot start IntelliJ IDEA
No JDK found. Please validate either IDEA_JDK, JDK_HOME or JAVA_HOME environment variable points to valid JDK installation."
That is enough for me. There are easier ways to program a Propeller.
lol, fair enough. It would be nice if they shipped a JDK with it. For me, the trouble is well worth it. The features provided save me a lot more time than setup costs.
D has always intrigued me. Are you fluent?
What do you have against interpreters?
Or Lazarus (cross platform object pascal) just like BST was written!
Ouch! Then it would be a multi-gigabyte download/install!
And it still would not work, unless they are providing builds for Windows, Mac and Linux, 32 and 64 bit, in all their different variations. And what about different architectures like the Raspberry Pi, BeagleBone, etc., etc.!
Which of course defeats the only point of using Java: its platform independence.
I can see the attraction of "real compiled language". Not because they are compiled but because of the static type checking, static analysis, and debugger help they offer. Not to mention getting rid of the bloat of a JVM and huge class libraries that have to be dragged along. Run time speed and memory efficiency may also be attractive.
However, if one is going to invest significant time and effort into a project that one would like to share with others, surely one should do it in a language that is maximally used and easily available to everyone? This is a social activity, not just a technical one.
That language would be JavaScript. Everyone you are going to reach over the net has it already. They have the run time already and it's fast enough for many purposes.
C++, especially written in the style made possible by recent standards, would be a good choice. Sure enough, your project could be built for almost any platform in short order, provided you keep away from proprietary APIs. C++ can be compiled to JavaScript, so it's available everywhere even if people don't have a C++ compiler or the will to use one.
Pascal is obsolete.
Language war anyone?
There are three different downloads of each IDE - one for each of the major OSes. And on Windows, there are two executables packaged: one for 32- and one for 64-bit. As for the Pi, BB, etc., they don't have the oomph, so it's a moot point.
Yep. I've yet to see any [useful] program in any language that is truly platform independent.
If you think any of those are not equally good (if not better) in IntelliJ IDEA relative to whatever you are currently used to in C or C++, you really need to set up a JDK and give IDEA a try. My experience is slightly biased since I've only done educational work in C/C++ (professionally I work in Java), but I've toyed around with some of the static checkers for C++ and they can't compare to IntelliJ's inspection system (in some ways, on functionality, and in every way when it comes to usability). Debugger? I'm calling you out on that one... nothing special about compiled languages when it comes to debugging.
Can't argue that too much. Takes an awful lot of bytes to hold all the goodies. Because of that, it's important to have options like Vim, emacs, SimpleIDE, PropellerIDE and others for slow/small machines. But if you have the space and power (anything dual-core with more than 1 GB of RAM I'd say for IntelliJ-based IDEs), absolutely worth it.
You're talking about mbed's system. Awesome tooling. I love it. But talk about performance issues. I shudder to think what would happen if you tried to take all of the functionality of IntelliJ IDEA and stuck it in JavaScript lol. Also.... I hate JavaScript. Maybe because I never took the time to become proficient? I'm amazed that people are able to write such large and complex programs in such an awful language. I like structure, thank you very much. But, I can't argue the usefulness of a fully web-based IDE like mbed.
Kye's OmniaCreator. Awesome project. Tons of potential. I just wish CLion came out 12 months earlier so he could have built it from that instead of QtCreator. But nonetheless, a huge step forward from Parallax's traditional IDEs.
Obviously Python is the way to go. Or, what I'd prefer even more would be Java compiled to native binary. Unfortunately, the only project I've found for that is going on 6 years without an update: https://gcc.gnu.org/java/ I love the Java syntax and structure it provides. Far easier to work in than C++ and the errors are significantly more understandable. It is more limiting, and for an embedded system that could sure get annoying... but on desktop machines I love it.
I am trying to learn Spanish. ;-)
I've been thinking Hindi would be interesting to try since now the development teams I manage are in India! It would be fun to try and meet them half way. :0)
I just started using D last week. I'm really liking it so far, but am definitely not fluent yet. Unlike with C++, I can actually spend time writing code instead of waiting forever for the compiler to finish, fighting it over syntax ambiguities like nested templates such as x<y<y>> (which produce "error: unexpected >>"), or trying to decode cryptic three-page-long errors! It also has unittests and many other features that make testing and debugging trivial, and the compiler is much better (though still not perfect) at producing understandable compile error messages. I haven't needed a debugger on it at all yet and haven't really had any hard-to-find bugs yet either.
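For anyone who hasn't hit the ">>" problem, here is a minimal, generic illustration (not code from any project mentioned here): before C++11, the closing ">>" of a nested template was lexed as the right-shift operator, so older compilers rejected the first declaration below.

// Minimal illustration of the pre-C++11 nested-template ambiguity.
#include <vector>

std::vector<std::vector<int>> nested;    // rejected by pre-C++11 compilers: ">>" read as right-shift
std::vector<std::vector<int> > portable; // the extra space compiles everywhere

int main() { return 0; }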
I only really have a problem with interpreters when I want something to be fast. A parser/compiler definitely falls into this category - I (like everyone else) can't stand compilers that take forever to compile stuff. If it doesn't need to be fast, I'd probably write it in Lua, and possibly not even use LuaJIT (however, this may change now that I've discovered D).

My problem with interpreters that don't do JIT is that they are slow because of the interpreter overhead. My problem with interpreters that do do JIT is that they usually have higher-level functions than those found in compiled languages, functions that are relatively slow but still used very often. For example, even if you have a JITing JS or Lua interpreter, you still have to deal with indexing hash tables, an extremely common operation which is relatively slow, while a compiled language would just use a struct.

However, I don't dislike all interpreted/JIT'd languages/systems - LLVM, which can be interpreted or JIT'd, but is usually compiled, is my favorite compiler system. I'm OK with LLVM because it is low level enough to be compilable into fast machine code. In fact, if there was an LLVM backend for the propeller, I might prefer using C over Spin on the prop (but would still use lots of pasm).
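To make the hash-table-versus-struct point concrete, here is a rough sketch (illustrative only, nothing Propeller-specific): the dynamic-language style pays for a hash lookup on every field access, while the compiled version is just a load from a fixed offset.

#include <string>
#include <unordered_map>

struct Point { int x, y; };                        // member offsets fixed at compile time

int dynamicStyle(std::unordered_map<std::string, int>& obj) {
    return obj["x"] + obj["y"];                    // two hash computations and probes per call
}

int compiledStyle(const Point& p) {
    return p.x + p.y;                              // two loads at constant offsets
}

int main() {
    std::unordered_map<std::string, int> obj{{"x", 1}, {"y", 2}};
    Point p{1, 2};
    return dynamicStyle(obj) + compiledStyle(p);
}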
I dislike Java and .NET because they are interpreted, but I also have other, possibly bigger reasons - Java has subtle design flaws that only show up after you have hundreds of lines of code written in it, and .NET is a microslop product and is highly inferior to LLVM.
FYI The PropBasic compiler is written in Lazarus.
P.S. Pascal is NOT dead. In fact, I've even thought about writing a Pascal compiler for the Propeller. Any interest?
Bean
To me, the main appeal for Microcontroller Languages has been to have a largely compatible version on the PC.
Here, we use FreeBasic, Lazarus/FreePascal, ( and also Delphi & C ).
FreeBasic is small and nimble and has a compact install & their debug is improving.
Works great for command line testers on a std PC.
Lazarus/FreePascal continues to improve, and is quite close to Delphi - we found there were more 'rules' than brick walls when porting projects from Delphi to Lazarus. I'm sure that continues to improve.
Lazarus has better support for larger projects, and cross platform, than FreeBasic.
The Arduino success is partly because they managed to 'hide' the housekeeping complexities of C.
I wonder if a similar approach could be used for a PropPascal ?
A System that allowed PropBasic // PropPascal // PASM // PropGCC to co-operate in any mix, would be compelling.
It would certainly be a dramatic way to underscore the Prop's 8 cores
LLVM would be perfect for this. Why isn't there an LLVM backend for the propeller? Suddenly, virtually any language that had an LLVM compiler would work on the propeller.
Ray
Multiple backends are really needed, as it needs more than just LLVM - ideally, any language should also be able to generate PASM (i.e. COG native) binaries.
There will always be a size limit on those, but it makes the cores much more accessible.
Many compilers produce ASM code that can further be manually tuned, and most allow in-line ASM.
Of course, In-line ASM use would only 'fit' into small modules that were destined for COG native operation.
Just thought of another fork of this (given the thread is called "New Language Parser for Spin?"): a Pascal front end that created Spin byte codes.
That combination uses the ROM in the P1 (and also the P2 Spin work) but provides a language that is less niche.
I disagree. Only one backend should be necessary - big code, little code, inline PASM, or whatever. The compiler should try to let code run natively for the longest stretches possible, and only swap stuff out (in large but not necessarily full-cogram blocks) if and when necessary. This will make even more sense on the P2, because of the hubram streamer. It should try to keep helper functions in cogram for any duration they are in heavy use alongside other, longer-running functions, to avoid constantly swapping stuff back and forth from hubram to cogram.

A small loader kernel, such as that in CP_HYDRAMAN_005.spin, which I have attached, should always live somewhere in cogram to facilitate code loading. To avoid wastefully loading code that might get skipped by conditional branches, the compiler should arrange for the loader kernel to only load code up to a conditional branch and then load the next block depending on which way the branch goes, unless the branch is very likely to go one way, in which case the loader should keep going.
This is different from propgcc's fcache in that it would always be used and not just inside tight loops. It's like being able to fcache lots of functions at once.
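For what it's worth, here is a rough host-side model of the idea (purely illustrative - the names and numbers are made up, and a real implementation would copy PASM blocks from hub RAM rather than call C++ lambdas):

#include <cstdio>
#include <functional>

// A "block" stands in for a chunk of compiled code plus its cogram footprint.
struct Block {
    const char* name;
    int sizeLongs;               // cogram the block needs
    std::function<void()> body;  // stands in for the loaded PASM
};

const int kCogramLongs = 496;    // usable cog RAM on the P1
const int kLoaderLongs = 16;     // assume the loader kernel stays resident

struct LoaderKernel {
    int residentLongs = kLoaderLongs;
    void run(Block& b) {
        if (residentLongs + b.sizeLongs > kCogramLongs)
            residentLongs = kLoaderLongs;          // evict everything but the loader
        residentLongs += b.sizeLongs;              // "copy" the block in from hub RAM
        std::printf("running %s (%d longs resident)\n", b.name, residentLongs);
        b.body();
    }
};

int main() {
    LoaderKernel loader;
    Block physics{"physics", 250, [] { /* update game state */ }};
    Block render {"render",  300, [] { /* draw the frame  */ }};
    for (int frame = 0; frame < 3; ++frame) {      // each game iteration triggers block loads
        loader.run(physics);
        loader.run(render);
    }
}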
The only argument for multiple backends is for a second backend that emits compressed code. However, having access to the full capabilities of the propeller means that compression isn't as necessary and is also slightly less feasible because there are more possible instructions.
If you don't believe me that this is fast: Hydraman's main loop block calls 10 other blocks, many of which are fairly large, making a total of 22 block loads per game iteration. The main game loop manages to get 60 FPS, even though the loader kernel used by Hydraman misses the hub cycle by one instruction every time through the loop. I challenge anyone to make an exact clone of Hydraman in regular old propgcc, and hereby declare it impossible. It must be an exact clone - the two versions must be indistinguishable by playing them. It must use no external RAM. Only the game engine cog has to be written in C (no inline asm!) - the graphics, sound, and input drivers can use the original PASM, and the controller glue can be in whatever you want. By the way, the graphics alone from Hydraman use 4096 longs, and you have to use those same graphics in your clone.
P.S. Note that I have not said that this is impossible to do on a propeller. It has already been done on a propeller, except in PASM. I am saying it is impossible to do in PropGCC. That means that I haven't guaranteed that it is possible to do.
Also, Turbulence uses this technique. Turbulence is by far the most complex piece of code I've ever seen run on the propeller. It swaps stuff out from EEPROM, so I'm allowing the PropGCC Hydraman clone to use EEPROM if necessary (not that it will do you much good).
This comes down to semantics, as I would describe something able to create "big code, little code, inline PASM, or whatever" as actually having multiple backends.
Sure, it would be nice to have something automatically manage those multiple backends, as you describe.
Presumably creating a compiler backend/code generator/optimizer for a new architecture is a significant piece of work that requires expertise in the compiler in question. If you are not going to do it yourself, finding such an expert might be tricky. Indeed. GCC supports C, C++, Objective-C, Objective-C++, Java, Fortran, Ada, and Go; is that not enough to be getting on with? There is a Pascal front end for GCC, but that seems to have been abandoned years ago.
It's not so simple though. Having got the code generator working there is a lot of work to do in creating the run time and libraries each language requires.
I don't mean a backend manager. I mean exactly one backend that wouldn't see any difference between differently sized code, besides the fact that some happen to fit entirely inside of one cog while others don't.
You could have two functions, one big enough to need block swapping and the other small enough to fit entirely in one cog (even with all of its functions), and cognew them both into cogs, and the big one would use block loading and the small one wouldn't, both with the same backend.
The backend would take LLVM IR and spit out pseudo-PASM that has long, global addresses, and then look at the pseudo-PASM and figure out the best way to cut it into blocks and then fix all of the addresses to either be local cogram addresses or loader calls. If it sees that only one block is necessary for everything one cog will be doing, it will omit the loader kernel from that cog.
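As a very rough sketch of what that last fix-up step might look like (all the names and numbers here are hypothetical, and real PASM obviously involves more than branch targets), the idea is just to map each global pseudo-address to a (block, local offset) pair and turn cross-block branches into loader calls:

#include <cstdio>
#include <vector>

// One pseudo-PASM instruction: a global long address, plus an optional branch target.
struct PseudoInsn {
    int globalAddr;
    int targetAddr;   // global address of the branch target, or -1 if not a branch
};

int main() {
    const int kBlockLongs = 400;  // leave room in cogram for the loader kernel

    std::vector<PseudoInsn> code = {
        {0, 2},       // short local branch
        {1, -1},      // plain instruction
        {2, 450},     // branch into a different block -> becomes a loader call
        {450, 0},     // and back again
    };

    for (const PseudoInsn& in : code) {
        int block = in.globalAddr / kBlockLongs;
        int local = in.globalAddr % kBlockLongs;
        if (in.targetAddr < 0)
            std::printf("insn %d -> block %d, local %d\n", in.globalAddr, block, local);
        else if (in.targetAddr / kBlockLongs == block)
            std::printf("insn %d -> local jump to %d\n", in.globalAddr, in.targetAddr % kBlockLongs);
        else
            std::printf("insn %d -> loader call into block %d\n", in.globalAddr, in.targetAddr / kBlockLongs);
    }
}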
I plan to do it myself if I ever find the time (hah!). I don't really know a terrible amount about LLVM, but it's very high on the list of things I want to learn, not only for the purpose of a propeller backend.
What's so hard about runtime libraries? Is it because they are so huge, or what?
The Pascal frontend has been replaced by FPC, which is on the cutting edge of development and was used to create BST (along with the Lazarus IDE) for the Prop, which has no rivals no matter how hard people bloviate about GCC. It's not about the tools but the craftsman, although you can save much of the pain of accidental complexity by using a good toolset. Sequence, alternate, repeat.
dp
FPC is neat in that it is a really fast compiler. On the other hand moving our code around to different platforms with different processors has been a pain. Finding developers who want to work on it is tricky.
Object Pascal as a language does not offer anything over other languages, say C++, except perhaps some more fussy type checking. Do correct me if I'm wrong. So there are no major compelling features that make it preferable to anything else. So it comes down to personal preference in syntax style mostly.
Hence my comment, Pascal is obsolete.
Sequence, select, iterate, iterate, iterate...iterate, iterate, Control-C. Hmm.. that needs debugging
Edit. Oh I forgot. The range checking in Pascal is occasionally helpful in locating bugs. That is a plus point.
I'm not quite following this ?
Do you have a working Pascal flow for the Prop, using FPC frontend ?
I did find this on the web, about FPC on RaspberryPi
http://www.maketecheasier.com/writing-pascal-programs-on-raspberry-pi/
FPC seems to be able to run as a native compiler and also shows a vanilla DOS-style editor/IDE on the RasPi.
Not quite as polished as Lazarus IDE, but looks quite usable.
Searching on my drives here, I find the DOS IDE fp.exe on some older FPC Win releases (2.4.4 & earlier)
FPC itself is a "DOS-style", command-line-driven compiler.
Lazarus and the other IDEs for FPC are separate programs.
I have had success using fpc on the Pi and other small ARM boards.
Yes, as the link above says, fpc.exe is the compiler (the outer driver), fp.exe is the simpler DOS-like IDE choice, and the larger lazarus.exe is the Win GUI IDE/debug choice.
Then you could look at what applications actually get written, and tailor the next generation for the observed uses. This might be advantageous over accommodating every what-if that never gets used.
So much so that the GCC guys have been redoubling their efforts to keep up with it. There seems to have been some cross-fertilization of ideas.
The result is that an LLVM target for the Prop is not necessary. GCC is quite OK.
Or at least that is what I glean from my very limited understanding of such things.
All in all, I'm overjoyed that we have GCC for the Prop. A big thanks to everyone who has made that possible.