G-code - Page 2 — Parallax Forums


Comments

  • Peter Jakacki Posts: 10,193
    edited 2017-06-12 12:21
    I did have a quick look at it to get a bit of an idea of how to go about it, but I can see that it would be good to hook up some motors, or maybe get hold of some basic mechanism, and feed g-code into it. Of course, you can play an old Atari classic like Breakout while you are waiting too. Perhaps you could even access g-code over Ethernet with EASYNET as well.


    btw brucey, you're on! I think I'd like to play with some motors again.
  • potatohead wrote: »
    P2 is looking to be a prime contender. I'm eager to see us get to real silicon. May make for very differentiated, cost effective product offerings in this space.

    P2 is an insignificant, small MCU in a world dominated by the cost of precision parts:

    optical axis shafts ...

    linear motion ball screws ...

    anti-backlash nuts ...

    custom/adjustable preload ...

    motor and brake ...

    gears/reducers ...

    optical encoders ...

    tool changers ...

    (and now we talk about flat surfaces and tolerances ...)
  • Peter
    btw brucey, you're on! I think I'd like to play with some motors again.

    Motors are always fun and especially cool when you can make them do something useful.

    I always meant to get around to the etch-a-sketch thingy shown somewhere in the forums. This type of project would let you build and test code for a very minimal cost, although it is somewhat useless beyond that. Of course it would not have a Z axis, but at least you could test linear and circular interpolation code, which in my eyes is of the utmost importance for any multi-axis synchronized machine.

    In all honesty, I have faith in your abilities... I just wish you were making your attempt in C :)
  • David Betz Posts: 14,511
    edited 2017-06-12 14:26
    idbruce wrote: »
    Peter
    btw brucey, you're on! I think I'd like to play with some motors again.

    Motors are always fun and especially cool when you can make them do something useful.

    I always meant to get around to the etch-a-sketch thingy shown somewhere in the forums. This type of project would let you build and test code for a very minimal cost, although it is somewhat useless beyond that. Of course it would not have a Z axis, but at least you could test linear and circular interpolation code, which in my eyes is of the utmost importance for any multi-axis synchronized machine.

    In all honesty, I have faith in your abilities... I just wish you were making your attempt in C :)
    If it works, what difference does it make what language it is written in? It seems to me that Tachyon is the most likely candidate for this because I believe it has the highest code density of any Propeller language.

  • idbruce wrote: »
    Peter
    btw brucey, you're on! I think I'd like to play with some motors again.

    Motors are always fun and especially cool when you can make them do something useful.

    I always meant to get around to the etch-a-sketch thingy shown somewhere in the forums. This type of project would let you build and test code for a very minimal cost, although it is somewhat useless beyond that. Of course it would not have a Z axis, but at least you could test linear and circular interpolation code, which in my eyes is of the utmost importance for any multi-axis synchronized machine.

    In all honesty, I have faith in your abilities... I just wish you were making your attempt in C :)

    The whole reason for Tachyon in the first place was making the Prop do what I wanted to do without running into memory limits and slow, slow code. I know everybody says the P2 will do it, but that only makes me chuckle, as of course P3, P4, or P5 will do it too, but we have none of those, and not even P2, although that has been dangled in front of us for years. What happens if Chip chokes on a chicken bone? All that effort and expectation put into a promise goes up in smoke, whereas you and I need to be practical and deal with reality, which one day may or may not be P2. Then you can C whatever you like to your heart's delight :)

    Imagine if none of us had ever bothered using the first CPUs we had available because they weren't good enough, and we salivated year after year for that perfect chip. It reminds me of the story about two men who go out to fish for the day: one is determined to come back only with big fish, while the other is happy to come back with whatever he catches. One guy went hungry that day. We want to be like the other guy and enjoy what we have now, and when the big one comes along, well, we'll enjoy that too.


  • Bruce, I looked back at the Teacup thread from 2 years ago, and it looked like we were close to getting it to work. The problem was that the code running under CMM mode was just a bit too slow to keep up. The usual solution for this on the Prop is to determine where the bottleneck is and put that small piece of code into its own cog running PASM code. Ultimately, it looked like the 32K RAM limitation required making some tradeoffs with the code. To me it would make sense to break up the Teacup code into two pieces -- one piece would decode a g-code file and write the motor control commands to a file, and the other piece would then read the motor control file and execute the commands.
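
    A minimal C sketch of the two-piece split described above; the record layout and names are illustrative assumptions, not Teacup's actual format:

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical fixed-size record: output of the g-code decoding pass,
           input to the motion-execution pass. */
        typedef struct {
            int32_t  steps[3];      /* signed step counts for X, Y, Z                */
            uint32_t step_period;   /* timer ticks between steps on the fastest axis */
            uint8_t  flags;         /* bit 0 = extruder on, bit 1 = end of job       */
        } motor_cmd_t;

        /* Stub for the part that actually generates step pulses (a PASM cog in practice). */
        static void execute_move(const motor_cmd_t *c)
        {
            printf("move %ld %ld %ld, period %lu\n",
                   (long)c->steps[0], (long)c->steps[1], (long)c->steps[2],
                   (unsigned long)c->step_period);
        }

        /* Second pass: read the pre-decoded command file from SD and execute it. */
        static void run_motor_file(const char *path)
        {
            FILE *f = fopen(path, "rb");
            if (!f) return;

            motor_cmd_t cmd;
            while (fread(&cmd, sizeof cmd, 1, f) == 1) {
                execute_move(&cmd);
                if (cmd.flags & 0x02) break;    /* end-of-job marker */
            }
            fclose(f);
        }

        int main(void) { run_motor_file("job.bin"); return 0; }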
  • Dave
    Bruce, I looked back at the Teacup thread from 2 years ago, and it looked like we were close to getting it to work. The problem was that the code running under CMM mode was just a bit too slow to keep up. The usual solution for this on the Prop is to determine where the bottleneck is and put that small piece of code into its own cog running PASM code. Ultimately, it looked like the 32K RAM limitation required making some tradeoffs with the code. To me it would make sense to break up the Teacup code into two pieces -- one piece would decode a g-code file and write the motor control commands to a file, and the other piece would then read the motor control file and execute the commands.

    I know we were very close to getting it to work, but it just seemed like the obstacles kept coming, and I also have been tempted more than once to review that code.

    I still have all of my parts, except for the cabinet, because I used the cabinet for another project.

    However, at this point, my boards are set up for two-Propeller coordination, which I think is a better plan, because it provides the additional IO pins necessary for a decent interface and removes some of the burden from memory. The controller boards, linear actuators, stepper drivers, cables, extruder, etc. are just collecting dust at this time, which is really a crying shame.

    If I am not mistaken, which I could be, I believe the other software that I was working on is pretty close also, except for the interpolation and trajectory planning.

    Here is the main issue though: while I love everything Parallax related, time is a definite factor. I am currently working on two machines, which is slow going already, and adding a third machine into the mix would become overwhelming. As mentioned elsewhere in the forum, I can easily resolve some of my CNC software issues by simply choosing Arduino for specific projects which require interpolation and trajectory planning, such as the 3D printer.

    However, I am a person that does not accept defeat very well, so perhaps I could look into it a little more when I have time, but keep in mind that my mother is very sick and requires a lot of my attention, and that I have two other projects going.
  • David Betz
    If it works, what difference does it make what language it is written in? It seems to me that Tachyon is the most likely candidate for this because I believe it has the highest code density of any Propeller language.

    It makes a lot of difference to me, because of any code alterations that may be required to make it adaptable to another type of machine.
  • @Dave Hein

    Alright, I have downloaded the last zip file on page 8 of the Teacup thread. I have browsed a few of the .c and .h files, and all I can say is that sure was some complicated programming :)

    It will take a little time to get a good grasp on the code again and additionally, I will have to compare my current two board setup against config.h, just to see if it is even worth the trouble of opening up this can of worms.

    I suppose that if the current controller boards will not easily adapt, I could get a project board to putz around with.

    I will also examine the possibility of splitting this code for the current controller boards.
  • idbruce wrote: »
    David Betz
    If it works, what difference does it make what language it is written in? It seems to me that Tachyon is the most likely candidate for this because I believe it has the highest code density of any Propeller language.

    It makes a lot of difference to me, because of any code alterations that may be required to make it adaptable to another type of machine.
    Ah, I guess that makes sense. Can't just use it as a black box if you need to adapt it for a particular machine.

  • MJB Posts: 1,235
    David Betz wrote: »
    idbruce wrote: »
    David Betz
    If it works, what difference does it make what language it is written in? It seems to me that Tachyon is the most likely candidate for this because I believe it has the highest code density of any Propeller language.

    It makes a lot of difference to me, because of any code alterations that may be required to make it adaptable to another type of machine.
    Ah, I guess that makes sense. Can't just use it as a black box if you need to adapt it for a particular machine.
    Tachyon is actually very easy to adapt ;-) - just start playing interactively ...
  • MJB wrote: »
    David Betz wrote: »
    idbruce wrote: »
    David Betz
    If it works, what difference does it make what language it is written in? It seems to me that Tachyon is the most likely candidate for this because I believe it has the highest code density of any Propeller language.

    It makes a lot of difference to me, because of any code alterations that may be required to make it adaptable to another type of machine.
    Ah, I guess that makes sense. Can't just use it as a black box if you need to adapt it for a particular machine.
    Tachyon is actually very easy to adapt ;-) - just start playing interactively ...
    Probably true but you have to admit that there is a pretty steep learning curve. It's undoubtedly worth it given the capabilities you gain but I think it takes some effort to get there.
  • JasonDorie Posts: 1,930
    edited 2017-06-14 19:47
    Interpreting a useful subset of GCode is relatively trivial. The hard part is turning that into useful movement. That includes trajectory planning with look-ahead, acceleration / deceleration handling with different limits for each axis, and different step-per-inch units per axis. You need to allow for tolerances & slop when cornering with piece-wise linear segments. Then you have to execute the motion plan, paying attention to system inputs (like limits, and the pause or stop functions), and be able to resume after a pause. It gets complicated in a hurry.

    I feel like the prop would be well suited for executing the plan once you have one, but actually generating it is non-trivial. Setups like Grbl / TinyG / SmoothieBoard all do this by streaming from a host (or SD card) and doing lookahead on only as much as their buffer allows. On the older Arduinos that's not much, though the newer Due systems do a much better job because they have the horsepower.

    TinyG does jerk-controlled acceleration, whereas many of the boards are just constant acceleration (trapezoidal velocity profile).
    https://github.com/synthetos/TinyG/wiki/Jerk-Controlled-Motion-Explained

    This is a great talk detailing some of the joy of implementing some of this:
    http://hackaday.com/2017/01/03/derek-schulte-path-planning-for-3d-printers/


    I think P2 could handle this as well (or better) than a single-core processor, as the planning and executing parts are inherently separate. Being able to run it as two different cores would be simpler than the interrupt-based approaches these other systems use, but it's still a complicated problem.
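
    For the constant-acceleration case mentioned above, the per-segment planning math is compact. A C sketch, assuming the segment starts and ends at rest (a real planner carries entry and exit speeds from the look-ahead):

        #include <math.h>
        #include <stdio.h>

        /* Plan one straight segment with a trapezoidal velocity profile.
           Units: steps, steps/s, steps/s^2. */
        typedef struct {
            long   accel_steps, cruise_steps, decel_steps;
            double peak_v;
        } profile_t;

        static profile_t plan_segment(long total_steps, double v_max, double accel)
        {
            profile_t p;
            long ramp = (long)(v_max * v_max / (2.0 * accel));  /* steps needed to reach v_max */

            if (2 * ramp >= total_steps) {          /* too short to cruise: triangular profile */
                p.accel_steps  = total_steps / 2;
                p.decel_steps  = total_steps - p.accel_steps;
                p.cruise_steps = 0;
                p.peak_v       = sqrt(accel * (double)total_steps);
            } else {
                p.accel_steps  = ramp;
                p.decel_steps  = ramp;
                p.cruise_steps = total_steps - 2 * ramp;
                p.peak_v       = v_max;
            }
            return p;
        }

        int main(void)
        {
            profile_t p = plan_segment(8000, 4000.0, 20000.0);
            printf("accel %ld, cruise %ld, decel %ld, peak %.0f steps/s\n",
                   p.accel_steps, p.cruise_steps, p.decel_steps, p.peak_v);
            return 0;
        }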
  • D.P Posts: 790
    edited 2017-06-14 21:17
    DELETED, code not relevant
  • idbruce Posts: 6,197
    edited 2017-06-14 21:31
    Jason
    Interpreting a useful subset of GCode is relatively trivial. The hard part is turning that into useful movement. That includes trajectory planning with look-ahead, acceleration / deceleration handling with different limits for each axis, and different step-per-inch units per axis. You need to allow for tolerances & slop when cornering with piece-wise linear segments. Then you have to execute the motion plan, paying attention to system inputs (like limits, and the pause or stop functions), and be able to resume after a pause. It gets complicated in a hurry.

    Gee... You sure make it sound complicated ;)

    I agree 100%!

    Until a person starts sifting through all of the code that makes up a feature-rich CNC application, they generally will not have a clue about all the finer details that go on behind the scenes. There is a lot of code, performing many different actions, to achieve precise movement at the desired speed, or at a top speed determined by the process.

    Writing CNC code that contains interpolation is one thing, but writing CNC code that includes both interpolation and ramping is a completely different story. The complexity level shoots through the roof with all the different factors involved, and all of these factors must be considered when developing CNC software that is adaptable to various machines of the same category, such as 3D printers, as well as to other types of machinery, such as milling machines. Most of these factors would be established within a configuration file that sets the parameters at build time or program start, and those parameters are then used throughout the software to make any given machine run properly. It all begins with the system configuration and the physical attributes of a machine; from there, the software must account for the physical differences, so that the appropriate formulas and algorithms can be applied in the appropriate sections of code to make a variety of machines run properly.

    Making CNC code adaptable to machines within the same category is hard enough, but making it adaptable to various other types of machinery once again takes the software to another level of complexity.
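
    As a concrete reference for the interpolation half of the problem described above, a Bresenham-style multi-axis linear move fits in a few lines of C; the ramping half would modulate the delay between iterations. step_axis() is a hypothetical stand-in for pulsing a step pin with the direction already set:

        #include <stdio.h>
        #include <stdlib.h>

        static void step_axis(int axis) { printf("step axis %d\n", axis); }

        /* Bresenham/DDA linear interpolation: step subordinate axes in
           proportion to the dominant axis so the path stays straight. */
        static void line_move(long dx, long dy, long dz)
        {
            long d[3]   = { labs(dx), labs(dy), labs(dz) };
            int  major  = d[0] > d[1] ? (d[0] > d[2] ? 0 : 2) : (d[1] > d[2] ? 1 : 2);
            long steps  = d[major];
            long err[3] = { 0, 0, 0 };

            for (long i = 0; i < steps; i++) {
                for (int a = 0; a < 3; a++) {
                    err[a] += d[a];
                    if (err[a] >= steps) {      /* this axis is due for a step */
                        err[a] -= steps;
                        step_axis(a);
                    }
                }
                /* delay_for_current_feed_rate();  -- ramping would vary this */
            }
        }

        int main(void) { line_move(10, 4, 0); return 0; }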
  • jmg Posts: 15,140
    JasonDorie wrote: »
    Interpreting a useful subset of GCode is relatively trivial. The hard part is turning that into useful movement. That includes trajectory planning with look-ahead, acceleration / deceleration handling with different limits for each axis, and different step-per-inch units per axis. You need to allow for tolerances & slop when cornering with piece-wise linear segments. Then you have to execute the motion plan, paying attention to system inputs (like limits, and the pause or stop functions), and be able to resume after a pause. It gets complicated in a hurry.

    I feel like the prop would be well suited for executing the plan once you have one, but actually generating it is non-trivial. Setups like Grbl / TinyG / SmoothieBoard all do this by streaming from a host (or SD card) and doing lookahead on only as much as their buffer allows....

    Yes, it does seem best to split this into two parts.

    There was another thread where playback from an SD card was used for machine control, with the playback rate defining the time axis.
    That has the simplest possible end-point complexity, but it is rather wasteful of storage, as pauses and waits consume space.

    Time-stamps seem to be missing from the default G-codes; adding those would allow a master-side program to generate machine info as a series of way-points.
    With dX,dY,dT all being sent, you could have many axes co-operating with a Prop-friendly level of complexity.
    It does not seem hard to add codes for time-stamps?

    For such storage, QuadSPI memory looks very cheap, and USB-QuadSPI bridge parts can simplify the update/download aspect.
    e.g. the FT4222HQ-C-R is just $1.53, and QuadSPI memory can be 20~30c for a megabyte of storage.
    Such a BOM is also easy to migrate P1 -> P2, as needed.
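
    A C sketch of the dX,dY,dT way-point idea; the field names and units are assumptions. Each axis derives its own step period from the shared dT, which is what keeps the axes synchronized:

        #include <stdint.h>
        #include <inttypes.h>
        #include <stdlib.h>
        #include <stdio.h>

        /* Hypothetical time-stamped way-point: relative steps per axis plus
           the time budget (dT) for the whole move. */
        typedef struct {
            int32_t  dx, dy, dz;
            uint32_t dt_ticks;      /* duration of this way-point, in timer ticks */
        } waypoint_t;

        /* One axis's view of a way-point: ticks between its steps. */
        static uint32_t step_period(int32_t steps, uint32_t dt_ticks)
        {
            long n = labs((long)steps);
            return n ? dt_ticks / (uint32_t)n : dt_ticks;   /* an idle axis just waits out dT */
        }

        int main(void)
        {
            waypoint_t w = { 200, -50, 0, 1000000 };    /* 200 X steps, -50 Y steps in 1M ticks */
            printf("X every %" PRIu32 ", Y every %" PRIu32 ", Z waits %" PRIu32 " ticks\n",
                   step_period(w.dx, w.dt_ticks),
                   step_period(w.dy, w.dt_ticks),
                   step_period(w.dz, w.dt_ticks));
            return 0;
        }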
  • JasonDorie wrote: »
    Interpreting a useful subset of GCode is relatively trivial. The hard part is turning that into useful movement. That includes trajectory planning with look-ahead, acceleration / deceleration handling with different limits for each axis, and different step-per-inch units per axis. You need to allow for tolerances & slop when cornering with piece-wise linear segments. Then you have to execute the motion plan, paying attention to system inputs (like limits, and the pause or stop functions), and be able to resume after a pause. It gets complicated in a hurry.

    I feel like the prop would be well suited for executing the plan once you have one, but actually generating it is non-trivial. Setups like Grbl / TinyG / SmoothieBoard all do this by streaming from a host (or SD card) and doing lookahead on only as much as their buffer allows. On the older Arduinos that's not much, though the newer Due systems do a much better job because they have the horsepower.

    TinyG does jerk-controlled acceleration, whereas many of the boards are just constant acceleration (trapezoidal velocity profile).
    https://github.com/synthetos/TinyG/wiki/Jerk-Controlled-Motion-Explained

    This is a great talk detailing some of the joy of implementing some of this:
    http://hackaday.com/2017/01/03/derek-schulte-path-planning-for-3d-printers/


    I think P2 could handle this as well (or better) than a single-core processor, as the planning and executing parts are inherently separate. Being able to run it as two different cores would be simpler than the interrupt-based approaches these other systems use, but it's still a complicated problem.

    Jason,
    You are one of the few people around here who know what they are talking about and who have the competency and drive to deliver projects.
    Do you think you can deliver a G-code interpreter similar to Grbl using the Prop1?
  • JasonDorie Posts: 1,930
    edited 2017-06-16 18:26
    It's hard to say if it's even possible on P1 without getting into it. The parts that parse GCode and convert to machine coords aren't hard (I've done that bit for my laser project) but the path planning and execution are non-trivial. I didn't do anything for that, and I was already pushing the RAM limits of the P1 in LMM mode. CMM would be more conservative, but slower. You'd probably need to jump through a bunch of hoops to make it work, like using a combination of CMM and PASM, and it would not be simple to do. If the P1 had 128 KB you could just write the whole thing in LMM and be done with it. You could code it all in Spin, but it'd be too slow to be useful.

    And I'm pretty sure there are a lot of other people who deliver too, they're just so busy doing it that you don't hear much about the results. :)
  • JasonDorie wrote: »
    It's hard to say if it's even possible on P1 without getting into it. The parts that parse GCode and convert to machine coords aren't hard (I've done that bit for my laser project) but the path planning and execution are non-trivial. I didn't do anything for that, and I was already pushing the RAM limits of the P1 in LMM mode. CMM would be more conservative, but slower. You'd probably need to jump through a bunch of hoops to make it work, like using a combination of CMM and PASM, and it would not be simple to do. If the P1 had 128 KB you could just write the whole thing in LMM and be done with it. You could code it all in Spin, but it'd be too slow to be useful.

    And I'm pretty sure there are a lot of other people who deliver too, they're just so busy doing it that you don't hear much about the results. :)
    What if you "compiled" the G-code into binary byte codes and wrote them to an SD card? Then you could get rid of the on-board G-code compiler and just execute the byte codes off the SD card when actually driving the device.
  • Executing the plan shouldn't be that hard - most of what I've seen produces really short linear segments that are maybe a millisecond in duration. You basically have a set of direction values (1 bit per motor) and a step count to execute over that slice. Building up that plan is the hard bit - most of them do it by scanning ahead by some number of steps and looking for hard stops or sharp corners, because those are your speed limiters, and then working backward from them. Part of the complexity is that you have to limit yourself to a maximum speed based on how far ahead you can see, and you're constantly re-evaluating your path with each new GCode line you get in - it's a lot of extra work because you can't just buffer the entire path.

    I'd suggest doing the plan on a PC and then just executing it on the Prop, but then you can't easily pause and re-start. You need the same logic to begin in the middle of a paused path as you do to plan the path in the first place. A chunk of the complexity of the Arduino-based solutions is managing concurrency - they use interrupts to service the stepper motors and do weird things with interrupt timing to make it work. The Prop wouldn't need to deal with that, but it's still hard to get right.
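
    A C sketch of the slice format described above (direction bits plus per-axis step counts, executed over a fixed time slice); the field names are assumptions, and on a Prop the inner loop would live in a PASM cog:

        #include <stdint.h>

        /* One planner output slice: run for a fixed duration with these direction
           bits and per-axis step counts (e.g. one slice per millisecond). */
        typedef struct {
            uint8_t  dir_bits;      /* 1 bit per motor: 1 = forward, 0 = reverse   */
            uint16_t steps[3];      /* steps to issue on X, Y, Z during the slice  */
        } slice_t;

        static void set_direction_pins(uint8_t bits) { (void)bits; /* drive DIR outputs  */ }
        static void issue_step(int axis)             { (void)axis; /* pulse one STEP pin */ }
        static void wait_one_tick(void)              { /* timer wait */ }

        /* Spread each axis's steps evenly across the slice; pause/stop inputs
           would be sampled at slice boundaries so a job can resume cleanly. */
        static void run_slice(const slice_t *s, uint32_t slice_ticks)
        {
            set_direction_pins(s->dir_bits);
            for (uint32_t t = 0; t < slice_ticks; t++) {
                for (int a = 0; a < 3; a++) {
                    uint64_t before = (uint64_t)t       * s->steps[a] / slice_ticks;
                    uint64_t after  = (uint64_t)(t + 1) * s->steps[a] / slice_ticks;
                    if (after > before)
                        issue_step(a);          /* this tick crosses the axis's even spacing */
                }
                wait_one_tick();
            }
        }

        int main(void)
        {
            slice_t s = { 0x05, { 40, 13, 0 } };    /* X and Z forward, Y reverse */
            run_slice(&s, 1000);                    /* 1000 ticks in this slice   */
            return 0;
        }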
  • I think parsing G-Code while running is a waste of time on the P1 or even the P2. Actually we should call them G-Strings, because that is what they are.

    My Rapman printer seems to do that; the files on its SD card are binary.

    Going all the way to just saving pin states for the stepper driver (like in the other thread) is at the other extreme: the files get way too big, and there is way too little left for the Prop to do.

    But maybe there is some intermediate state to use as a file format. The idea of time-stamping moves to synchronize multiple axes makes a lot of sense to me when running one COG per axis, rather than all of the axes in one big interrupt-driven loop like on other controllers.

    Sure, we need G-code as input; that is a widely used standard with tons of machine-specific variations. Then some program preprocesses the files, splitting axis movements into different files (?) or areas in one file and time-stamping them. At that stage, ramping, ooze control, path optimization or tool-change commands can be inserted or changed.

    Now save that as a binary file with the numbers as longs rather than ASCII G-Strings, so it is easier to handle on the Propeller side.

    It can basically still be (optimized?) G-code, i.e. 'Command' + 'Number', but already translated to binary for faster access. For visualization it can be re-displayed as ASCII if the need arises.

    This preprocessing program COULD also run on the Prop, but I guess it would be better run on the machine producing the G-code, because of the available processing power.

    On the Propeller side, things now get smooth. Each cog runs one axis, following the script by time-codes, checking bounds/errors, notifying some master cog of its progress/errors, and sailing smoothly along.

    We could have different PASM for stepper/servo/actuator/driver/encoder/endstop variations for each axis, running the same binary-coded movement script.

    To show off and use the multiple cores of the P1 or P2, this should be the right approach: one COG per axis and preprocessed G-Strings.

    One main cog, one FSRW/Fat_Engine cog, six axes of freedom, or on the P2 even 14 of them?

    And yes, I think running from SD is way better than running from a serial connection.

    While running, it is standalone; you do not need a constant connection. It is also better when running jobs frequently, since you can just select them and run.

    Enjoy!

    Mike
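
    One possible shape for the 'Command' + 'Number' long encoding, sketched in C; the letter-in-the-top-byte layout and micrometre scaling are assumptions, not an existing format:

        #include <stdint.h>
        #include <stdio.h>

        /* Pack one G-code word, e.g. X12.345, into a single 32-bit long:
           top 8 bits = command letter, low 24 bits = signed value in micrometres.
           The Propeller side then never touches ASCII at run time. */
        static uint32_t pack_word(char letter, double value_mm)
        {
            int32_t um = (int32_t)(value_mm * 1000.0);      /* micrometre resolution */
            return ((uint32_t)(uint8_t)letter << 24) | ((uint32_t)um & 0x00FFFFFF);
        }

        static void unpack_word(uint32_t w, char *letter, int32_t *um)
        {
            *letter = (char)(w >> 24);
            *um     = (int32_t)(w & 0x00FFFFFF);
            if (*um & 0x00800000) *um -= 0x01000000;        /* sign-extend the 24-bit field */
        }

        int main(void)
        {
            char c; int32_t um;
            unpack_word(pack_word('X', -12.345), &c, &um);
            printf("%c %ld um\n", c, (long)um);             /* prints: X -12345 um */
            return 0;
        }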
  • jmg Posts: 15,140
    msrobots wrote: »
    ...
    And yes, I think running from SD is way better than running from a serial connection.

    While running, it is standalone; you do not need a constant connection. It is also better when running jobs frequently, since you can just select them and run.
    SD is a useful option, but I think being able to also run from a serial link would help development too.
    SD makes sense for the largest storage cases, and for removable storage too, but there is also SPI for middle-sized storage...

    Checking into SPI flash, I see there is 64 Mbit at 71c/1k, 128 Mbit at $1.54/1k, 256 Mbit from $1.93/1k, 2 Gbit from $3.19/1k, and 4 Gbit at ~$5.

  • evanh Posts: 15,126
    msrobots wrote: »
    But maybe there is some intermediate state to use as a file format. The idea of time-stamping moves to synchronize multiple axes makes a lot of sense to me when running one COG per axis, rather than all of the axes in one big interrupt-driven loop like on other controllers.
    G-code can be as simple as you like. Just implement G01 and use that. Can't get much simpler than a collection of vectors.
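
    In that spirit, a minimal C sketch that pulls just the G00/G01 words out of one line of text and yields a move vector; absolute coordinates are assumed and there is no error handling:

        #include <stdio.h>
        #include <stdlib.h>
        #include <ctype.h>

        /* Parsed state carried from line to line: target position and feed rate. */
        typedef struct { double x, y, z, f; int is_move; } gline_t;

        static gline_t parse_line(const char *s, gline_t prev)
        {
            gline_t g = prev;
            g.is_move = 0;
            while (*s) {
                char letter = (char)toupper((unsigned char)*s++);
                if (!isalpha((unsigned char)letter)) continue;
                char *end;
                double v = strtod(s, &end);     /* number following the letter */
                s = end;
                switch (letter) {
                    case 'G': if (v == 0.0 || v == 1.0) g.is_move = 1; break;
                    case 'X': g.x = v; break;
                    case 'Y': g.y = v; break;
                    case 'Z': g.z = v; break;
                    case 'F': g.f = v; break;
                }
            }
            return g;
        }

        int main(void)
        {
            gline_t pos = { 0, 0, 0, 600.0, 0 };
            pos = parse_line("G1 X10.5 Y-3 F1200", pos);
            if (pos.is_move)
                printf("move to %.3f %.3f %.3f at F%.0f\n", pos.x, pos.y, pos.z, pos.f);
            return 0;
        }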
  • Exactly: just a list of vectors for each axis, with a time sync to keep things coordinated with one COG per axis.

    Maybe steal something from the MIDI format? Its timing seems stable enough to organize multiple instruments playing together.

    Not G-Strings but MIDI?

    Enjoy!

    Mike
  • evanh Posts: 15,126
    Adding timing requirements to something that doesn't have timing requirements seems a tad unwise.
  • potatohead Posts: 10,253
    edited 2017-06-17 17:47
    The list I linked earlier represents a useful set of codes, minus the spline curve stuff. That is spiffy, but rarely generated.

    The core transforms, relative and absolute modes, line and arc moves, and speeds and feeds are all one needs. M codes are specific to machine type and configuration.

    As for data, most programs are large. It is extremely common to buffer and drip-feed to the controller. A Prop-based system would only need to buffer a few lines, tops, to be effective and useful.

    Accepting an entire program to be processed and run from an SD card would be fine, provided an interactive option were possible. A downside is that software that reports on progress and/or does visuals would need to know what code is running. That is a major downside for a lot of people.

    Feed over serial makes a ton of sense actually.

    The bit rate does not need to be super high. There is software for mills and 3D printing, running on things like a Raspi, that will drip-feed all day long with no trouble.

    Given a small buffer, just accepting the gcode directly would have the broadest general utility and applicability. Offer the SD card cache as an option.

    The devices I'm working with will do that, and almost nobody uses it. They want to generate their gcode, use a browser to access the machine, upload, and go.

    A Prop doing the motion and accepting gcode via serial (FTDI USB), combined with a PC or Pi, would rock hard.

    The key thing is good motion and no-b.s. gcode acceptance. Just take it bog standard; that is all people want and need. Toss all the super-advanced stuff. It goes unused in nearly all cases except hand programming.

    Above all, no glitches. This is where a Prop would shine, along with superior motion.

    Do those, just take the code over serial, provide for a robust number of stops and sensors, and it's a winner.
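
    A C sketch of the small line buffer that drip-feeding over serial needs between the receiver and the planner; the sizes and the single-producer/single-consumer split are assumptions:

        #include <string.h>

        /* A few complete g-code lines of look-ahead between the serial receiver
           and the planner; the host's drip feed stalls naturally when it fills. */
        #define LINE_MAX  64
        #define RING_SIZE 8

        static char     ring[RING_SIZE][LINE_MAX];
        static volatile unsigned head, tail;    /* head: next write, tail: next read */

        static int ring_full(void)  { return (head + 1) % RING_SIZE == tail; }
        static int ring_empty(void) { return head == tail; }

        /* Receiver side: collect characters, push a line when '\n' arrives.
           Returns 0 if the ring is full so the caller can retry (or hold off the host). */
        static int push_char(char c)
        {
            static char     line[LINE_MAX];
            static unsigned len;

            if (c == '\r') return 1;
            if (c != '\n') {
                if (len < LINE_MAX - 1) line[len++] = c;
                return 1;
            }
            if (ring_full()) return 0;          /* keep the partial line; retry later */
            line[len] = '\0';
            strcpy(ring[head], line);
            head = (head + 1) % RING_SIZE;
            len = 0;
            return 1;
        }

        /* Planner side: copy out the next complete line, if any. */
        static int pop_line(char *out)
        {
            if (ring_empty()) return 0;
            strcpy(out, ring[tail]);
            tail = (tail + 1) % RING_SIZE;
            return 1;
        }

        int main(void)
        {
            const char *rx = "G1 X10 Y5 F1200\nG1 X0 Y0\n";
            char line[LINE_MAX];
            while (*rx) push_char(*rx++);
            while (pop_line(line)) { /* hand the line to the parser / planner here */ }
            return 0;
        }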
  • CAD/CAM packages often adapt the output code to the target controller. I use Vectric software that adapts to hundreds of different machines, using a small post-processor for each. The post-processor definition file describes what commands the controller understands, formatting details etc.
    Even so, very, very few G/M commands are necessary for most machines: moves, rapids, arcs, referencing ... probably 15 tops. I have no doubt that Tachyon could handle it, but the acceleration and constant-velocity motion control require a lot of thought.

    The ability to restart, pause, and back up jobs is important in Mach3, and this requires a user interface or a host computer. Mach3's user interface has about six screens, graphical presentations of toolpaths, and probably 50-75 mouse/keyboard controls. Which comes back to T Chap's point.
  • Have any of you looked at this: http://obex.parallax.com/object/85 ?
    There is a thread about this as well.
  • Jim Fouch Posts: 395
    edited 2017-08-09 05:30
    I know this is not a G-Code interpreter, but I think it does show the power of what a P1 can do from the other end.

    I wrote this several years ago when I was trying to get a better understanding of how my 3D printer actually worked. I wanted to see how the speeds actually achieved compared to the maximum speed you told the slicing software to use. I needed to track EVERY movement the X, Y, Z and extruder axes made. I did almost all of it in PASM and it actually works quite well.

    One of the really cool things is that it could easily be applied to almost any stepper-controlled CNC machine to show the exact position and speed at any time.
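
    A C sketch of the core of such a monitor: derive position from the DIR level on each STEP edge and estimate speed from the spacing between edges. The names are illustrative; the PASM version described above does this per axis in a cog:

        #include <stdint.h>
        #include <stdio.h>

        /* Per-axis monitor state: position in steps plus a speed estimate
           derived from the time between step edges. */
        typedef struct {
            int32_t  position;      /* in steps                                 */
            uint32_t last_edge;     /* timestamp of the previous step, in ticks */
            uint32_t ticks_per_sec; /* system timer rate                        */
            double   speed;         /* most recent estimate, steps per second   */
        } axis_monitor_t;

        /* Call on every rising STEP edge with the DIR pin level and a timestamp. */
        static void on_step_edge(axis_monitor_t *m, int dir_high, uint32_t now)
        {
            uint32_t dt = now - m->last_edge;   /* wraps safely with unsigned math */

            m->position += dir_high ? 1 : -1;
            if (dt) m->speed = (double)m->ticks_per_sec / (double)dt;
            m->last_edge = now;
        }

        int main(void)
        {
            axis_monitor_t x = { 0, 0, 80000000, 0.0 };     /* e.g. an 80 MHz tick */
            on_step_edge(&x, 1, 40000);
            on_step_edge(&x, 1, 60000);                     /* 20000 ticks later   */
            printf("position %ld steps, speed %.0f steps/s\n", (long)x.position, x.speed);
            return 0;
        }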
  • @Jim Fouch,

    nice!

    And it gives me a brilliant plan for visualizing my current g-code test project.

    Just take another QuickStart and then I can live-test my P1 against a P1 monitor. Why hadn't I thought of that?

    thanks

    Mike