LOTS of coding done this weekend. Whew! Going to take an eye strain break I think. Good progress though.
Using Linux shared memory for IPC (inter-process communication), I can now read the sensor data gathered by the sensor process from a separate process that will be the main robot implementation. Sensor data is updated about 20 times per second. The sensor process tucks an iterator value away in the shared memory that indicates a loop count. The main process checks this value each time it updates data from the shared memory. If that value hangs, we know we are not getting data anymore. All sensors are then marked with qualification errors, and we plan for that to be the time to stop and signal for help. We also note a higher qualification level for sensor data that has changed: higher confidence that this sensor is actually working.
Here's a sample of the main process spitting out the sonar readings and proximity values it's getting from the sensor process:
The numbers to the far right are the iterator last byte value and the qualification level assigned to that sensor group (0 = unqualified, 1 = iterator changed, 2 = value changed).
Here's what happens if the sensor process falls over, explodes into flames, and vaporizes while the main process is running:
After 1 second of seeing no iterator change, the data is marked with a qualification error and the process can react as needed. Since that means we are now worse off than Helen Keller, that reaction will be to halt right now.
Works the same if the main process is started without the sensor process being available. So now sensor data will always be available to the behaviors without needing to sprinkle update calls into the Behavior.run() implementation. Behavior.run() will still need to be re-entrant and swift, because the arbitrator still needs to give each behavior a crack at taking control if needed.
Loop: 002900
Sonar Data : 040:021:013:251 57: Q[2]
Proximity Data : 01 57: Q[1]
Temperature Data: 75.3: 75.3 57: Q[2]
Behavior [IR_Avoid] has control!
Loop: 002950
Sonar Data : 040:022:013:251 61: Q[2]
Proximity Data : 01 61: Q[1]
Temperature Data: 75.3: 75.3 61: Q[2]
I have a skeleton for the behavior implementation. I took my inspiration from the LeJOS implementation, with some alterations.
Class Arbitrator:
- Has a Vector of Behavior objects.
- Start() loops through behaviors
---- Queries each Behavior object if wantsControl()
---- The first Behavior that responds with 1 then gets its run() called.
Class Behavior:
- Has flag for isActive
- Has name
- pure virtual wantsControl()
- pure virtual run()
Class CustomBehavior:
- Constructed with access to sensor data structure.
- Implements wantsControl() to inspect sensors and other data to indicate if it needs to take action; depending on the run() implementation, it may suppress this if already active.
- Implements run() to set isActive, execute the behavior code, and return if short (it could be launched in a thread if it takes longer), then clear isActive.
The Behavior class is abstract. Each custom Behavior inherits from it and does the custom work in wantsControl() and run() but can be very different otherwise. That keeps the Arbitrator simple. It knows nothing beyond the relative order/priority and executing the Behavior interface functions. Once I have my motor interface classes built out I should be able to get something going where it moves again and reacts to something.
Looking back now I have created a library of about 30 classes ranging from I2C / SPI, to Timers, to interfaces to Linux GPIO. That's both the blessing and curse of C++. Making it simple to say:
i2cbus.sendByte(bt)
Involves oodles of code, and the more libraries and layers, the more difficult it all becomes to manage. I've become a faithful user of Git. It's saved my butt a couple of times now, and the ability to go back to what worked a few days ago can be a lifesaver as well.
I'm going to power on ahead to get the skeleton behavior framework hung up and then it's time to work on some motor movement. Going to be really exciting to see it start moving again!
One of the things I got better at with the first incarnation of the bot was faking threaded-ness without the hassle of threads. The arbitrator model really benefits from threads, but they are not exactly dead simple to do in C++, and even the libraries around them are not super friendly. They also bring a whole new world of bugs and issues. One way I simulate this on a bot THAT DOES NOT HAVE TO DIRECTLY CONTROL MOTORS is to break a longer behavior into stages. The code below describes it a bit better, but in general you have this to solve:
The avoid behavior needs to stop the robot and then back up away from the sensed obstacle. This means you must wait in three places: once for the motors to stop, a LONG wait to back away until you no longer sense the obstacle, and finally another short wait to stop again. Now, your motor controller lets you fire-and-forget motor commands, so awesome, but if you return from the function and re-arbitrate, you're going to get called again because there is still an obstacle sensed; without some state tracking you'll be stopping constantly. With a thread you could execute all the steps in sequence, using a flag of some kind to signal you are finished so the behavior stops wanting control. Without threads, you either hog the CPU waiting to finish all the steps, or allow run() to get called multiple times but only do the next step when the previous step is finished. Stages.
Here is the wantsControl() implementation. We want control if there is an IR Prox event OR if we are still in the middle of running from a previous event. We also do a little record keeping on what sensor caused this event. Later I want to make sure we notice if we have created another collision situation by trying to avoid this one. This is a simple avoid. We need to stop moving and then move away from the offending object.
inline int BehIRAvoid::wantsControl()
{
    uint8_t proxBits = p_sensorDat->p_prox->bits;

    // If there is a prox indication, or if the behavior is already
    // active, then we want control.
    if (proxBits != 0 or active != 0)
    {
        // If no previous trigger indication, store the pattern
        // that triggered this event.
        if (!triggerBits)
            triggerBits = proxBits;
        return 1;
    }

    // Otherwise we don't need control.
    return 0;
}
Here's run(). We start off getting the latest prox data. Then, if we were not active before, we set the active flag, advance the stage, and set a timer (for breaking out of a lock-up if needed later). Now we go on to execute stage 1. This would be a command to stop the motors and verify that they have in fact stopped, which is usually quite quick. We advance the stage and return to allow another arbitration round. If we get control next round, we start moving away from the object and see if the proximity sensors clear. If not, we return and more arbitration rounds proceed. Each time control returns to this behavior, we check the sensors and stop if the prox sensors show clear. Finally, after a few hundred rounds, that happens. Now we stop the motors and wait for them to really stop. Then we clear the active flag, clear the triggerBits, and set stage back to 0. Next round, the bot is out of collision proximity and ready to do something else.
This prevents the behavior from blocking arbitration during the long move-away portion and allows a higher-priority event, say a cliff sensor, to get through even while we are performing the current move.
inline int BehIRAvoid::run()
{
    uint8_t proxBits = p_sensorDat->p_prox->bits;

    if (!active)
    {
        active = 1;
        stage = 1;
        timer.set();
        printf("Behavior [%s] has control\n", name.c_str());
        printf(" STOP | MOVE AWAY | STOP\n");
    }

    if (stage == 1)
    {
        printf(" STG %d STOP\n", stage);
        // TODO: check that the stop has occurred.
        stage++;
        return 0;
    }
    else if (stage == 2)
    {
        printf(" STG %d MOVE AWAY\n", stage);
        if (!proxBits)
        {
            printf(" STOP\n");
            active = 0;
            triggerBits = 0;
            stage = 0;
            return 0;
        }
    }

    return 0;
}
This has been reasonably successful for me where the complexity doesn't overwhelm it and the stages are easy to divide up. There does come a point where it may be more advantageous to use a thread rather than this series of gated stages, but I've been able to make it work for most simple behaviors. It keeps the robot very responsive to new events as they occur. One function I did not show is stop(). This is how a higher-priority event tells this behavior it's got to quit right now, reset, and return. The arbitrator will always send stop() to the previous behavior before running a new behavior.
Here's what the debug of this looks like:
robot@roboblack-3:~/development/pbotmain/pbotmain$ ./bin/pbotmain
Adding [IR_Avoid].
Arbitrator now contains 1 behaviors.
Adding [Reposition].
Arbitrator now contains 2 behaviors.
Adding [Sentry].
Arbitrator now contains 3 behaviors.
[Sentry] wants ctl.
Running [Sentry]
Behavior [Sentry] has control
Loop: 000050
Sonar Data : 025:012:013:251 22: Q[2]
Proximity Data : 00 22: Q[1]
Temperature Data: 76.8: 76.8 22: Q[2]
Loop: 000100
Sonar Data : 026:012:013:251 2C: Q[2]
Proximity Data : 00 2C: Q[1]
Temperature Data: 76.8: 76.8 2C: Q[2]
....
[IR_Avoid] wants ctl. <--------- Here we trip an IR prox sensor.
Stopping [Sentry]
Running [IR_Avoid]
Behavior [IR_Avoid] has control
STOP | MOVE AWAY | STOP
STG 1 STOP
[IR_Avoid] wants ctl. <--------- Here we simulate taking some time to back up. Each time we just check to see if conditions are met and if not we return.
Running [IR_Avoid]
STG 2 MOVE AWAY
[IR_Avoid] wants ctl.
Running [IR_Avoid]
STG 2 MOVE AWAY
[IR_Avoid] wants ctl.
Running [IR_Avoid]
STG 2 MOVE AWAY
...
Running [IR_Avoid]
STOP
[Sentry] wants ctl. <------- After hundreds of iterations, finally we got away and now the low priority behavior gets control back.
Stopping [IR_Avoid]
[IR_Avoid] STOP MOTORS
[IR_Avoid] Confirm Stop
Running [Sentry]
Behavior [Sentry] has control
Loop: 000400
Sonar Data : 036:003:014:251 69: Q[2]
Proximity Data : 00 69: Q[2]
Temperature Data: 76.8: 76.8 69: Q[2]
Hope that's somewhat helpful to others who might be interested in doing behavior-based programs without fussing about with pthreads or Boost libraries.
So I was able to get my L6470 drivers wired up. First issue found: I mixed up MISO and MOSI on the pinout of my main board. All the other signals were good; I just wasn't getting data back. Corrected that by swapping the far-end pin sockets around, and I was then getting correct results over the SPI bus. It was getting late last night, but I wanted to at least see a wheel turn. So I snagged settings from my original test, which used some different motors. Pulled those constants into my code and fired it up. Hmmm, that's odd..... the thing shuts down. Restart the test..... same result. The whole thing shuts down. Hmmm. Well, maybe the motor is drawing too much for this plug-in power supply (yeah, this is where I make a big goof), so I switch to the LiPo battery. Start the program and snap, sizzle, smoke! My heart skipped a beat, but the system was still running. Just the one driver I had hooked up to the motor had a hole burned through the chip. Well, awesome.
So I ruled out any shorts or other obvious wiring faults. Then it hit me.... the settings passed to the chip were for motors with 1/3 the torque of these and FAR more coil resistance. In using the same params for these motors, I had supplied over 8 times the current I should have. Thus the release of the magic smoke. When I recalculated the values for the real motors and supplied those to the remaining motor driver, I was able to get motion without trying to light the chip on fire. So I'll need to order a new driver board to replace the fried one before I can get some movement, and I'll remember not to be so loose with the settings in the future. Clearly you can tell the chip to self-destruct if you are not careful.
And, just like that... it's dead. I knew I was going to get bitten by the fact that I had reversed the power connector on the first main-board, but didn't attend to it that day. Well, it bit me. Accidentally grabbed a cable for the first board and used it on the second board. The BeagleBone did not care for having GND raised to +12V and its 5V line now at -8.7V (3.3V). Poof goes a $40 board, as well as the data contained in the onboard eMMC. Luckily I did have a backup of the device and config files, so I can recover on the older BBB I have. Code should all be in Git. I just have to see what else got nuked by that. It appears the main-board also died; none of my working BBB boards will boot on that main-board now.
Lucky I put together two. I've been able to confirm the second main-board is working well enough to boot up. The power supply ADC is still functional, and the Propeller chip survived too.
So the only casualties appear to be the BeagleBone Black rev C board and the main-board. I've gotten it back online by using the backup image I made last month and pulling the code from Git that was current as of Friday night. The rest of the equipment seems to have survived the event. I have purged all reversed connectors from my inventory to avoid a repeat.
I'll avoid using the eMMC again while developing. At least if something like this happens again, keeping the whole system on an SD means it's no more difficult than swapping the SD into another board to get going again, so long as the SD itself survives. It's really remarkable how tough some of these devices are. That event was abusive, to say the least, and most of the electronics survived it. I'm impressed with both the Arduino and the Propeller.
I like the VEX omni wheels pretty well. Not using them with any drive applied, but they work very nicely and aren't horribly noisy just rolling around. I'm not thrilled with the way they look next to those nice shiny AL drive wheels, though.
Back to the shop a bit. I've not been terribly fond of the way the rear wheels look since making the front aluminum wheels; they've looked.... goofy ever since.
VEX now has some nicer 4" wheels than it used to, designed to use a matching hub. I don't care for the black plastic look of the hub they offer, so it was off to the drawing board.
The new hubs will blend very nicely with the rest of the robot and make a much nicer overall look to it.
Man, I have had about enough math tonight! I added rough odometry to the bot. It has always had the ability to make very defined turns and so forth, but had no idea of its orientation at any particular moment in time. After watching the excellent video from David Anderson on his SR04 robot, I was really impressed by his ability to reckon a position on a coordinate grid. It allowed some really neat behavior. I decided I really must have that and set out to learn how to calculate this data from encoders. I don't have real wheel encoders, so I have to fake it a little right now and use the step counter in the chip for fake ticks.
First go seemed great for a while and then went to garbage in no time. The ODO values all went nuts after some period of driving and became nonsense. It took some time to catch it, but the culprit was that the counter in the L6470 is not 32 bits. It's 21 bits. At my microstep level and drive reduction, that ran to about 21 wheel rotations, and then the values rolled to negative. Ugh...... I've worked around this by resetting the chip's counter each time the drives become idle. This means there is a maximum single motion of about 240", after which the bot MUST stop and allow the reset. The chips reject any attempt to reset the counter while in motion, even if the motion is not directly using them.
A reasonable workaround, but kinda ugly too. It will never be as accurate as even his rough odometry, but having some idea of where you are, where something you want to get to is, and how to get there in space may be handy as I go forward.
Sample output from testing. Robot starts at (0,0) heading 0. In this case we just drove straight forward for basic testing but the X,Y coordinates do agree roughly with what I observe. I need to create a box driving behavior to test how well it matches the real world though.
Finished the outside of the new wheel hubs for ProwlerBot. I've also added a set of filter template classes to my library, so things like Moving Average, Leaky Integrator, Median, and others can be added to a project with ease. All filters follow the same model: filter.in(samp) adds a new sample and returns the computed filtered value; filter.out() gets the value without providing input.
Been working on de-complexifying my ADC objects as well. They used to all provide voltage readings but I've now created a sensor type that gives voltage and reduced the ADC objects to simply returning the counts from the device. This seems like a cleaner implementation.
Have not made any forward progress on the design or programming of late. My home has decided to fall apart this winter, so each week begins a new project to fix some fresh problem that springs up. I've also jumped into the world of more serious CAM software for use in my shop. My university affiliation allowed me to get SprutCAM for next to nothing, and I also got a nice deal on BobCAD-CAM for 3 seats and 3-Axis Pro. Working my way through those products has been quite the vertical learning curve. I've been liking BobCAM pretty well, though. I was even able to try out some high-speed machining tool-paths, and it was impressive to see it ripping through AL buried 3/8 inch down in the cut. Cut the time on one part from 60 min to 25 min.
I'm still working on a fixture to produce my robot decks a little easier. Once completed I can start to finalize the mounting of electronics on the robot.
I did take ProwlerBot to my daughter's school for a presentation to her class. They have parents come during lunch and present on a topic to the kids, and my girl begged me to bring the robot in. It went really well. I did a short presentation on robots in general, then a personal demo of how to program an industrial robot vs. an autonomous robot by having them program me to navigate the room. I had a blast: the kids would tell me to "walk forward", so off I go, and I keep going right into the wall. You could see the light-bulbs turning on right away. After that I got "Walk forward 2 feet" and then "Turn left halfway". Then we talked about sensors and what I would need to navigate the room on my own, and I demonstrated that by pretending to seek out light from the window. I had the kids build a quick SONAR sensor on a breadboard, and that went better than I expected: 3 of the 4 worked, and one failed due to a bad battery.
The star was ProwlerBot, though. They were all very interested in how it could sense them and avoid them even if they moved. They spontaneously created a tunnel by standing behind each other, legs apart, to see if he could drive through, and he did. They even exposed some bugs for me, because he freaked out at some point after one kid was tripping every prox sensor on him continuously. It was really fun, and the kids seemed to enjoy the demo. Maybe one or two might take up the addiction later on. M.I.T.'s Genghis on the "Today Show" is what sparked the bug in me way back in the day.
Linux on the embedded stuff is killing me, though. It seems like every time I get two steps forward interfacing to the hardware, they change everything around and I'm back to troubleshooting why it's all broken now. That's becoming a tiresome dance these days. I've been hoping to wait long enough for the BeagleBone and its distros to settle down a bit so I can count on the OS not pulling the rug out from under me again.
Your robot is incredibly detailed! It is a jewel!
I love all your aluminum work!
Look also at this Omni wheel (it's the tail wheel that I use on my robot "The Artist"). I don't know if the dimensions are suitable for your robot, but the color and the whole combination of aluminum and black rubber are, I think, perfect for your design.
Taking into account your technical skills with aluminum, I think you could make one of those Omni wheels on your own!
Thanks! I have to admit, I was quite pleased with how they came out myself. Now I just need to create the new single piece left/right trucks so I can put 'em to use.
So I've been working on eliminating external micros wherever possible, and I've made a good bit of headway in that department. I've begun to understand the BeagleBone PRUs well enough to take a stab at some assembly language programming. The big win would be to take over generation of the 8MHz motor clock from the Propeller. Since each instruction takes 5ns, that gives me about 25 instructions to create a two-channel variable-frequency clock that maxes out at 8MHz. I think I can do it.
Without the need to fit a Propeller on the main board, I would save some real estate on the PCB and make several things easier, namely adding an RTC chip and battery, and then perhaps a more versatile I2C ADC. Another win would be taking over the operation of the sonar sensors. Right now that task is handed to an Arduino on the I2C bus, but the PRU could easily handle it as well. That would eliminate another sub-controller and some complexity. TI now has a C compiler that generates PRU code as well, which would allow me to do more than I'm capable of with my crappy assembly language skills.
I've also acquired a larger milling machine. A Weiss WM30. It's about 2 times the mass of my current mill and has a 1.5HP spindle. The previous owner already converted to CNC so I just need to add in plumbing for an automatic oil system and unfortunately I need to design a belt-drive for it as it's still a gear-head. Nice large work volume 22"x8.5"x14". I am also building a custom wood/fiberglass stand and pan for the machine and controls. I'll be looking to sell the smaller mill once this is all completed. Should give me *similar* capacity to a Tormach 770. So that project is competing for time with my robot, my kid, my home, my viola, my CCNA study, computers, my cabin site.... geeez. I think sometimes I have too much Smile to do.
It's been good news and bad news. I have been able to write a successful PRU program to generate a clock; however, I can't come up with a flexible (or any, really) way to generate two channels of different frequencies. The loop becomes too large, and the frequency resolution becomes too coarse to be of much use. I know I don't want to drop a 32-pin MCU in for this either; it used up a lot of space on version one that I could use for functionality. So.... for now I'll just live with a single channel of 8MHz. That I can generate from the PRU with some fudging (52% duty cycle).
The Si5351 costs all of $1.20 and can generate nearly any frequency on 3 channels with very fine resolution and controlled from the I2C bus. Perhaps this is easier done in some hardware. I'm going to work up a breakout for it and plan to add it to my PCB. For now I can green-wire the 8MHz clock where it needs to go and pull the Propeller from the board.
The other bit of news is that I have been able to get a PRU on the BeagleBone's processor interfaced to a sonar sensor and taking readings. With the code working now for a single sensor, it should be easy to write code for as many as 4 sensors. After that, the pin counts get too difficult. The BeagleBone can't do on-the-fly input/output swapping on a pin, so a pin is either an input or an output for the life of that boot-up. So a two-pin sensor is preferable to a one-pin sensor here.
To interface the disparate voltages (3.3V for the Beagle and 5.0V for the sensors), I'll turn to the handy 74VHC244MX again. Its ability to accept 5V inputs with a 3.3V Vcc is great and requires no external components. That is one weak link on the Beagle: the I/O pins are not terribly robust, so I consider it wise to buffer them all if possible, even if voltage translation is not required.
The sweep rate, sweep pattern, and sweep enable can all be handled from the controlling program by simple memory writes to the correct locations, where the PRU looks for new values after each sweep, so communication with the sensors is very fast. The sweep results are written to memory where the controlling program can poll them at its leisure.
One more external microcontroller eliminated from the design. Unfortunately that means a new PCB. To make this work in an ideal way I will need to do some rearranging of pins in the ProwlerBot cape design.
Comments
Using Linux shared memory for IPC (Inter-Process Communication), I can now read the sensor data that is gathered by the sensor process in a separate process that will be the main robot implementation. So sensor data is updated at about 20 times per second. The sensor process tucks away an iterator value in the shared memory that indicates a loop count. The main process checks this value each time it updates data from the shared memory. If that value hangs then we know that we are not getting data anymore. All sensors are marked with qualification errors and we would plan for that to be time to stop and signal for help. We also note a higher qualification level for sensor data that has changed. Higher confidence that this sensor is actually working.
Here's a sample of the main process spitting out the sonar readings and proximity values it's getting from the sensor process:
The numbers to the far right are the iterator last byte value and the qualification level assigned to that sensor group ( 0 = unqaulified, 1 = iterator changed, 2 = value changed).
Here's what happens if the sensor process falls over, explodes into flames, and vaporizes while the main process is running:
After 1 second of seeing no iterator change the data is marked with a qualification error and the process can react as needed. Since that means we are now worse off than Helen Keller that reaction will be to halt right now.
Works the same if the main process is started without the sensor process being available. So now sensor data will always be available to the behaviors without needing to sprinkle calls to update into the Behavior.run() implementation. Behavior.run() will still need to be re-entrant and swift because the arbitrator still needs to have each behavior get a crack at taking control if needed.
I have a skeleton for the behavior implementation. I took my inspiration from some the LeJOS implementation with some alterations.
Class Arbitrator:
- Has a Vector of Behavior objects.
- Start() loops through behaviors
---- Queries each Behavior object if wantsControl()
---- The first Behavior that responds with 1 then gets it's run().
Class Behavior:
- Has flag for isActive
- Has name
- pure virtual wantsControl()
- pure virtual run()
Class CustomBehavior:
- Constructed with access to sensor data structure.
- Implements wantsControl() to inspect sensors and other data to indicate if needs to take action, depending on run() implementation may suppress if already active.
- Implements run() to set isActive and proceed to execute the behavior code and return if short (could be launched in a thread if takes longer) and then clears isActive.
The Behavior class is abstract. Each custom Behavior inherits from it and does the custom work in wantsControl() and run() but can be very different otherwise. That keeps the Arbitrator simple. It knows nothing beyond the relative order/priority and executing the Behavior interface functions. Once I have my motor interface classes built out I should be able to get something going where it moves again and reacts to something.
Looking back now I have created a library of about 30 classes ranging from I2C / SPI, to Timers, to interfaces to Linux GPIO. That's both the blessing and curse of C++. Making it simple to say:
i2cbus.sendByte(bt)
Involves oodles of code and the more libraries and layers the more difficult it becomes to manage it all. i've become a faithful user of GIT. It's saved my butt a couple times now and the ability to go back to what worked a few days ago can be a life saver as well.
I'm going to power on ahead to get the skeleton behavior framework hung up and then it's time to work on some motor movement. Going to be really exciting to see it start moving again!
The avoid behavior needs to stop the robot and then back up away from the sensed obstacle. This means you must wait in three places. Once for the motors to stop, and a LONG wait again to back away until you no longer sense the obstacle, finally another short wait to stop again. Now your motor controller lets you fire and forget motor commands so awesome but if you return from the function and re-arbitrate then you're going to get called again because there is still an obstacle sensed, without some state tracking you'll be stopping constantly. With a thread you could execute all the steps in sequence using a flag of some kind to signal you are finished so the behavior will then stop wanting control. Without threads you either hog the CPU waiting to finish all the steps or allow run() to get called multiple times but only do the next step if the previous step is finished. Stages.
Here is the wantsControl() implementation. We want control if there is an IR Prox event OR if we are still in the middle of running from a previous event. We also do a little record keeping on what sensor caused this event. Later I want to make sure we notice if we have created another collision situation by trying to avoid this one. This is a simple avoid. We need to stop moving and then move away from the offending object.
Here's run(). We start off getting the latest prox data. Then if we were not active before, we set the active flag, advance the stage, and set a timer (for breaking out of a lock-up if needed later). Now we go on to execute stage 1. This would be a command to stop the motors and verify that they have in fact stopped usually quite quick. We advance the stage and return to allow another arbitration round. If we get control next round then we start moving away from the object and see if the proximity sensors clear. If not, we return and more arbitration rounds proceed. Each time if control returns to this behavior then we check the sensors and stop if the prox sensors show clear. So finally after a few hundred rounds that happens. Now we stop the motors and wait for them to really stop. Then we clear the active flag, clear the triggerBits, and set stage back to 0. Next round the bot is out of collision proximity and ready to do something else.
This prevents the behavior blocking arbitration during the long move away portion and allows a higher priority event to get through, like say a cliff sensor even while we are performing the current move.
This has been reasonably successful for me where the complexity doesn't overwhelm it and the stages are easy to divide up. There does come a point where it may be more advantageous to use a thread rather than this series of gated stages but I've been able to make it work for most simple behaviors. It keeps the robot very responsive to new events as they occur. One function I did not show is stop(). This basically is the way a higher priority event tells this behavior it's got to quit right now, reset and return. The arbitrator will always send stop to the previous behavior before running a new behavior.
Here's what the debug of this looks like:
Hope that's somewhat helpful to others that might be interested in doing behavior based programs without fussing about with pthreads or boost libraries.
So I was able to get my L6470 drivers wired up. First issue found was I mixed up MISO MOSI on the pinout on my main board. So all the other signals were good just wasn't getting data back. Corrected that by swapping the far end pin sockets around and I was then getting correct results over the SPI bus. It was getting late last night but I wanted to at least see a wheel turn. So I snagged settings from the my original test using some different motors. Pulled those constants into my code and fired it up. Hmmm, that's odd..... the things shut down. Restart the test..... same result. The whole thing shuts down. Hmmm. Well, maybe the motor is drawing to much for this plug in power supply (yeah this where I make a big goof) I switch to the LiPO battery. Start the program and snap, sizzle, smoke! My heart skipped a beat but the system was still running. Just the one driver I had hooked up to the motor had a hole burned through the chip. Well, awesome.
So I ruled out any shorts or other obvious wiring faults. Then it hit me.... the settings passed to the chip were for motors with 1/3 the torque of these and FAR more coil resistance. In using the same params for these motors I had supplied over 8 times the current I should have. Thus the release of the magic smoke. When I recalculated the values for the real motors and supplied those to the remaining motor driver I was able to get motion without trying to light the chip on fire. So I'll be needing to order a new driver board to replace the fried one before I can get some movement, and I'll remember not to be so loose with the settings in the future. Clearly you can tell the chip to self-destruct if you are not careful.
SOON!!!!!
Lucky I put together two. I've been able to confirm the second main-board is working enough to boot up. Power supply ADC is still functional and Propeller chip survived too.
I'll avoid using the eMMC again while developing. At least if something like this happens again, keeping the whole system on an SD means it's no more difficult than swapping the SD into another board to get going again, so long as the SD itself survives. It's really remarkable how tough some of these devices are. That event was abusive to say the least and most of the electronics survived it. I'm impressed with both the Arduino and the Propeller.
Are those VEX omnidirectional wheels? How do you like them?
I'm interested in using them in the EVE challenge for balancing on a ball.
Glad you find it useful. If you have any questions feel free to toss 'em at me. I'll do my best to give you a helpful answer.
VEX now has some nicer 4" wheels than it used to that are designed to use a matching hub. I don't care for the black plastic look of the hub they offer so it was off to the drawing board.
The new hubs will blend very nicely with the rest of the robot and make a much nicer overall look to it.
First go seemed great for a while and then went to garbage in no time. The ODO values all went nuts after some period of driving and became nonsense. It took some time to catch it, but the culprit was that the counter in the L6470 is not 32 bits. It's 21 bits. At my microstep level and drive reduction that ran to about 21 wheel rotations, and then the values rolled to negative. Ugh...... I've worked around this by resetting the chip's counter each time the drives become idle. This means there is a maximum single motion of about 240", after which the bot MUST stop and allow the reset. The chips reject any attempt to reset the counter while in motion, even if the motion is not directly using them.
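The roll-over handling can be sketched like so. I'm assuming a 22-bit two's-complement position register here (21 magnitude bits plus a sign bit); the names and the reset margin are illustrative:

```cpp
#include <cstdint>

constexpr int  POS_BITS = 22;                          // assumed register width
constexpr long POS_MAX  = (1L << (POS_BITS - 1)) - 1;  // 2097151 steps

// Sign-extend the raw register value to a full C++ long so negative
// positions read correctly instead of rolling to huge positives.
long signExtendPos(uint32_t raw) {
    raw &= (1u << POS_BITS) - 1;          // keep only the counter bits
    if (raw & (1u << (POS_BITS - 1)))     // sign bit set -> negative
        return (long)raw - (1L << POS_BITS);
    return (long)raw;
}

// True when the counter is close enough to rolling over that the drive
// should be stopped and the counter reset (the chip only accepts the
// reset while idle).
bool needsReset(long pos, long margin) {
    return pos > POS_MAX - margin || pos < -POS_MAX + margin;
}
```

The main loop accumulates the sign-extended value into its own 32-bit (or wider) running total before each reset, so the robot-level odometry never wraps.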
A reasonable workaround, but kinda ugly too. It will never be as accurate as even his rough odometry, but having some idea of where you are, where something you want to get to is, and how to get there in space may be handy as I go forward.
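For anyone curious, the position tracking itself is just standard differential-drive dead reckoning. A minimal sketch, with illustrative names and a made-up wheel-base figure:

```cpp
#include <cmath>

struct Pose { double x = 0, y = 0, heading = 0; };  // heading in radians

// Update the pose from the distance each wheel traveled since the last
// update (derived from the step counters). wheelBase is the distance
// between the two wheels, in the same units as the travel.
void updateOdometry(Pose& p, double dLeft, double dRight, double wheelBase) {
    double d   = (dLeft + dRight) / 2.0;        // distance of robot center
    double dTh = (dRight - dLeft) / wheelBase;  // change in heading
    // advance along the average heading over this interval
    p.x       += d * std::cos(p.heading + dTh / 2.0);
    p.y       += d * std::sin(p.heading + dTh / 2.0);
    p.heading += dTh;
}
```

Called each time fresh step counts arrive, this keeps an estimate of (x, y, heading) from the wheel motion alone.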
Sample output from testing. Robot starts at (0,0) heading 0. In this case we just drove straight forward for basic testing but the X,Y coordinates do agree roughly with what I observe. I need to create a box driving behavior to test how well it matches the real world though.
Here's a nice example, after turning, where the calculated heading seems to fit expectations from the commanded move, along with the tracked position in X/Y:
Been working on de-complexifying my ADC objects as well. They used to all provide voltage readings but I've now created a sensor type that gives voltage and reduced the ADC objects to simply returning the counts from the device. This seems like a cleaner implementation.
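The split looks something like this. Class names and the 12-bit / 3.3V reference are just for illustration:

```cpp
// The ADC object now only returns raw counts from the device.
class Adc {
public:
    explicit Adc(int counts) : counts_(counts) {}
    int readCounts() const { return counts_; }  // device access lives here
private:
    int counts_;  // stand-in for a real hardware read
};

// A separate sensor type owns the counts-to-volts conversion.
class VoltageSensor {
public:
    VoltageSensor(const Adc& adc, double vref, int fullScale)
        : adc_(adc), vref_(vref), fullScale_(fullScale) {}
    double readVolts() const {
        return adc_.readCounts() * vref_ / fullScale_;
    }
private:
    const Adc& adc_;
    double vref_;
    int fullScale_;
};
```

The nice part of the split is that a current sensor, a temperature sensor, and a voltage sensor can all share the same ADC object and differ only in their conversion.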
I'm still working on a fixture to produce my robot decks a little easier. Once completed I can start to finalize the mounting of electronics on the robot.
I did take ProwlerBot to my daughter's school for a presentation to her class. They have parents come during lunch and present on a topic to the kids, and my girl begged me to bring the robot in. It went really well. I did a short presentation on robots in general, then did a personal demo of how to program an industrial robot vs an autonomous robot by having them program me to navigate the room. I had a blast; the kids would tell me to "walk forward" so off I go and I keep going right into the wall. You could see the light-bulbs turning on right away. After that I got "Walk forward 2 feet" and then "Turn left halfway". Then we talked about sensors and what I would need to navigate the room on my own. Demonstrated that by pretending to seek out light from the window. I had the kids build a quick SONAR sensor on a breadboard and that went better than I expected. 3 of the 4 worked and one failed due to a bad battery.
The star was Prowlerbot though. They were all very interested in how it could sense them and avoid them even if they moved. They spontaneously created a tunnel by standing behind each other, legs apart, to see if he could drive through and he did. They even exposed some bugs for me because he freaked out at some point after one kid was tripping every prox sensor on him continuously. It was really fun, and the kids seemed to enjoy the demo. Maybe one or two might take up the addiction later on. M.I.T's Genghis on the "Today Show" is what sparked the bug in me way back in the day.
Linux on the embedded stuff is killing me though. Seems like every time I get two steps forward interfacing to the hardware, they change everything around and I'm back to troubleshooting why it all is broken now. That's becoming a tiresome dance these days. I've been hoping to wait long enough to let the BeagleBone and its distros settle down a bit so I can count on the OS not pulling the rug out from under me again.
I love all your aluminum work!
Look also at this Omni wheel. (It's the tail wheel that I use on my robot “The Artist”.) I don't know if the dimensions are suitable for your robot, but the color and the whole combination of aluminum and black rubber are, I think, perfect for your design.
Taking into account your technical skills with aluminum I think you could make one of those Omni wheels by your own!
Great work!
Congratulations!
Thanks! I have to admit, I was quite pleased with how they came out myself. Now I just need to create the new single piece left/right trucks so I can put 'em to use.
Without the need to fit a Propeller on the main board I would save some real estate on the PCB and make several things easier, namely adding an RTC chip and battery, and perhaps a more versatile I2C ADC. Another win would be taking over the operation of the sonar sensors. Right now that is handed to an Arduino on the I2C bus, but the PRU could easily handle that task as well. That would eliminate another sub-controller and some complexity. TI now has a C compiler that generates PRU code as well, which would allow me to do more than I'm capable of with my crappy assembly language skills.
I've also acquired a larger milling machine, a Weiss WM30. It's about 2 times the mass of my current mill and has a 1.5HP spindle. The previous owner already converted it to CNC, so I just need to add in plumbing for an automatic oil system, and unfortunately I need to design a belt-drive for it as it's still a gear-head. Nice large work volume: 22"x8.5"x14". I am also building a custom wood/fiberglass stand and pan for the machine and controls. I'll be looking to sell the smaller mill once this is all completed. Should give me *similar* capacity to a Tormach 770. So that project is competing for time with my robot, my kid, my home, my viola, my CCNA study, computers, my cabin site.... geeez. I think sometimes I have too much to do.
The Si5351 costs all of $1.20 and can generate nearly any frequency on 3 channels with very fine resolution and controlled from the I2C bus. Perhaps this is easier done in some hardware. I'm going to work up a breakout for it and plan to add it to my PCB. For now I can green-wire the 8MHz clock where it needs to go and pull the Propeller from the board.
To interface the disparate voltages (3.3V for the Beagle and 5.0V for sensors) I'll turn to the handy 74VHC244MX again. Its ability to accept 5V inputs with a 3.3V Vcc is great and requires no external components. That is one weak link on the Beagle: the I/O pins are not terribly robust, so I consider it wise to buffer them all if possible even when voltage translation is not required.
The sweep rate / sweep pattern / sweep enable can all be handled from the controlling program by simple memory writes to the correct locations, where the PRU looks for new values after each sweep, so communication with the sensors is very fast. The sweep results are written to memory where the controlling program can poll them at its leisure.
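The polling side could be sketched like this, reusing the same loop-counter freshness idea from the sensor process. The struct layout, field names, and qualification levels (0 = stale, 1 = counter moved, 2 = data changed) mirror the scheme I described earlier, but the code itself is illustrative:

```cpp
#include <cstdint>

// Shared region written by the producer (PRU or sensor process).
struct SweepShared {
    volatile uint32_t loopCount;   // bumped by the producer each sweep
    volatile uint16_t range[4];    // latest sweep results
};

// Consumer-side freshness check.
struct SweepReader {
    uint32_t lastCount  = 0;
    uint16_t lastRange0 = 0;
    // Returns qualification: 0 = stale, 1 = counter moved, 2 = data changed.
    int poll(const SweepShared& shm) {
        if (shm.loopCount == lastCount)
            return 0;                   // producer stalled: mark unqualified
        lastCount = shm.loopCount;
        int q = (shm.range[0] != lastRange0) ? 2 : 1;
        lastRange0 = shm.range[0];
        return q;
    }
};
```

In the real system the struct would live in memory shared with the PRU rather than an ordinary variable, but the qualification logic is the same either way.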
One more external microcontroller eliminated from the design. Unfortunately that means a new PCB. To make this work in an ideal way I will need to do some rearranging of pins in the ProwlerBot cape design.
The robot is absolutely beautiful.