Nice work Duane! I am wondering if you are going to introduce an Xbox control command as in the Eddie robot, that would be great.
The code I posted should work with the original Eddie code on the PC.
I think the PC converted the input from the Xbox controller to "GO" commands with the appropriate left and right power levels.
There are some game controllers that can be connected directly to the Propeller, but game controllers with a USB interface generally require a PC in order to communicate with the controller.
It wouldn't be very hard to add support for a Wii Nunchuck or PlayStation 2 controller to the software, but then the overall structure of the program would probably need to be changed, since the Eddie firmware is intended to be used as a slave controller to a PC.
I'm still trying to figure out what sort of strategy I should use when using this hardware without a PC. One method I've tried is using a second Propeller board as the master and having it send commands to the board running the Eddie firmware. This works, but it's kind of cumbersome. I'm inclined to think this code would need to be severely modified for use without a PC. Rather than using the main loop to monitor communication from the PC, the main loop should be analyzing sensor data and making decisions based on this data. The main loop should also work towards programmed goals and tasks. A Propeller-only program would lend itself to using some sort of game controller as an input device.
BTW, for those following along, I think I have a workable hack to make the "POSITION" mode commands ("ARC", "TRVL" and "TURN") reach their final destinations rather than stopping a few encoder clicks short of their goals. I'm working on making a demo showing what sort of precision can be expected using this new code. I hope to post another video and the code later today.
The "ARC", "TRVL" and "TURN" commands should no longer stop a few encoder ticks from the target position.
I didn't add a full-fledged integral component to the control algorithm. The error correction only occurs once the moving target position, "midPosition", reaches the final target position, "setPosition". Once these two targets agree with each other, the position error gets added to an integral variable which keeps increasing the power output until the wheel reaches its destination. It's not a very pretty fix but it does appear to work.
I haven't tested this new code on the traditional h-bridge hardware but I'm pretty sure it will work with either hardware configuration.
In an attempt to calibrate my robot, I had it drive forward five full revolutions (720 encoder ticks). I measured this distance to compute the distance travelled per encoder tick. This ended up being 3.357mm per encoder tick. I also measured the distance between the wheels of my robot in order to figure out the radius of the robot in units of encoder ticks. My robot has a radius of 60 encoder ticks. Anyone wanting to use this software should make these measurements on their own robot and modify the following constants in the header object to match.
BOT_RADIUS_E = 60
POSITIONS_PER_WHEEL_ROTATION = 144
POSITIONS_PER_ROTATION = 752 '376 * 2 '744
' "POSITIONS_PER_ROTATION" is the distance one wheel needs to travel to rotate the
' robot a full revolution while one wheel remains stationary.
The constant "POSITIONS_PER_ROTATION" ends up being twice the circumference of the robot or pi * BOT_RADIUS_E * 4.
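For anyone wanting to sanity check that relationship, here's the arithmetic as a quick Python sketch (Python used purely as a calculator here, not firmware code):

```python
import math

BOT_RADIUS_E = 60  # robot radius in encoder ticks, measured as described above

# With one wheel held still, the moving wheel sweeps a circle whose radius is
# the wheelbase (2 * BOT_RADIUS_E), so a full rotation of the robot takes
# 2 * pi * (2 * BOT_RADIUS_E) = pi * BOT_RADIUS_E * 4 ticks of wheel travel.
positions_per_rotation = math.pi * BOT_RADIUS_E * 4

print(round(positions_per_rotation))  # prints: 754
```

The computed value (about 754) is close to, but not exactly, the 752 used in the header object, which hints at why a direct measurement of the radius isn't quite precise enough.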
I made another video with this latest modification but the difference in performance wasn't noticeable. I'll wait until I have a more interesting video before posting another one.
I hope some of you try this version of the code and let me know how it works for you. Please let me know which hardware configuration you're using when posting feedback.
I still haven't tested the ADC command with the version "B" code. I should have a compatible ADC chip wired up soon to test it. If any of you get a chance to test it, I'd appreciate knowing if it works correctly or not.
Edit: See post #36 for the latest version of the code. There is a small bug in this version of the code.
Edit(12/29/14) To reduce confusion about which code to use, I'm leaving only the latest versions attached to post #36.
The constant "POSITIONS_PER_ROTATION" ends up being twice the circumference of the robot or pi * BOT_RADIUS_E * 4.
It turns out the value of "BOT_RADIUS_E" isn't precise enough to use the above equation to compute "POSITIONS_PER_ROTATION".
I had my robot execute ten turns in a row to see how well it was aligned with the original direction after the ten rotations, and I found my robot was off by about 30 degrees.
It took a couple of tries but I finally used the value 749 for "POSITIONS_PER_ROTATION" and the robot ended up pointing the same direction after ten rotations. The exact value one uses for "POSITIONS_PER_ROTATION" will probably vary among different robots. If anyone needs help getting their robot calibrated let me know and I'll post some code to rotate the robot ten times.
I've added a second deck (of sorts) to my robot and mounted a SF02 Laser Rangefinder to the top deck. The rangefinder is on a pan and tilt gizmo which I'm hoping to use to let the robot better sense its surroundings.
While this video certainly isn't an exciting one, it does illustrate the limits of using encoder feedback, particularly when driving on carpet. The first rectangle looks good but things get progressively worse as time goes on. I'm still not sure what the robot was trying to do at the end (it was probably doing what I told it to do).
Edit(12/29/14) There's a more recent video of the latest firmware in action embedded in post #41.
I was using the following code to clear the "stillCnt" array.
longfill(@stillCnt, 0, 2)
stillCnt was a byte array, so the longfill zeroed out unintended memory locations.
To allow greater flexibility in timing the shutdown of motors from no movement, I decided to change "stillCnt" to a long-sized array.
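To picture the clobber, here's a little Python sketch of what the mismatched fill did (the byte values and layout are made up for illustration):

```python
# Spin's longfill(@stillCnt, 0, 2) clears 2 longs, which is 8 bytes.  With
# "stillCnt" declared as a 2-byte array, the 6 bytes that happened to follow
# it in hub RAM were zeroed as well.
ram = bytearray(range(1, 11))  # stand-in for hub RAM; stillCnt is the first 2 bytes

ram[0:2 * 4] = bytes(2 * 4)    # what longfill(@stillCnt, 0, 2) actually wrote

print(list(ram))               # prints: [0, 0, 0, 0, 0, 0, 0, 0, 9, 10]
```

With a byte array, the matching call would have been bytefill(@stillCnt, 0, 2); changing "stillCnt" to longs makes the original longfill correct instead.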
I've attached the code with the bug fix.
As usual the "B" version of the code is intended to be used with the Eddie Control Board. The "C" version is for use with a Propeller Activity Board and HB-25 motor controllers.
Edit(1/1/15): The code is now stored on my GitHub account. What I had been calling version "B" is now "Eddie.spin". "EddieActivityBoard.spin" is what was previously called version "C". Both of these versions are available as archives.
See notes in post #38 about testing this code with a terminal. See additional information about calibrating your robot in post #42.
I was just testing the files attached to post #36 and I've found the "B" version does not work.
I'll get this figured out and upload corrected files soon.
The version "EddieB141225d" I posted earlier does appear to work but it doesn't include the integral correction.
It takes me a bit of time to change from one hardware configuration to the other. I thought the changes I had made wouldn't be affected by the different hardware.
The "B" version of the code attached to post #36 has now been tested. I'll need to switch the hardware on my robot before I can test the "C" version of the code.
I have noticed a small difference in the way the program behaves when using the HB-25 motor controllers and a normal h-bridge. When using the h-bridge the destination keeps getting overshot. I used a quick fix of adding a "TOO_SMALL_TO_FIX" constant ("DEADZONE" was already taken). I think this overshoot could be fixed by tuning the "kIntegralNumerator" value. If someone with h-bridge hardware gets around to tuning this value before I do, I hope you'll share your settings.
As mentioned earlier in this thread, the firmware can be tested with the Parallax Serial Terminal. If a terminal window is used to test the program, you'll want to type "watch 0" (or "kill 0") to turn off the watchdog feature. The default setting on the watchdog timer is one second. Setting the timer to zero disables it.
The program's default input and output is in hexadecimal but to make it easier to type commands from a terminal, the commands "decin 1" and "decout 1" will change the input and output to decimal format.
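To see why this matters, here's a quick Python illustration of the same keystrokes parsed both ways (Python only; the firmware's parser is in Spin):

```python
# The parameter "20" typed at the terminal means different values
# depending on the radix the firmware is expecting.
digits = "20"

as_hex = int(digits, 16)      # default interpretation ($20)
as_decimal = int(digits, 10)  # interpretation after "decin 1"

print(as_hex, as_decimal)     # prints: 32 20
```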
You can change the default input and output by changing the value of the "decInFlag" and "decOutFlag" variables in the top DAT section.
demoFlag byte 0 ' set to 255 or -1 to continuously run the demo
' other non-zero values tell the
' program how many times it should execute
' the method "ScriptedProgram".
debugFlag byte 0 'FULL_DEBUG + 1
decInFlag byte 0 ' set to 1 to use decimal input rather than hexadecimal
decOutFlag byte 0 ' set to 1 for decimal output rather than hexadecimal
The debugFlag will cause different amounts of information to be displayed depending on the value. A value of zero will only display the data returned from the commands entered.
The demoFlag will cause the "ScriptedProgram" method to run. I use this to test the various commands without the need of having a PC connected to the Propeller. The demoFlag may be set from the terminal window with the command "demo 1".
I have been following with envy. You have done so much in a small amount of time. With SPIN files dated 12.25.2014, I hope your wife is not unpleased with your work on those days.
Unfortunately I do not have the products on hand to help you test. I wish I did. I have been working on ELEV8 V1 and ELEV8 V2 rebuilds. I have been sending Courtney and Ken my revisions to the V2 build manual and pictures, which she has incorporated. Still have a one or two to do.
You have done so much in a small amount of time. With SPIN files dated 12.25.2014, I hope your wife is not unpleased with your work on those days.
All is well with my wife. We had a nice mellow Christmas this year. We didn't travel and we didn't have any guests. While it's fun to spend the holidays with family and friends, it's also nice to have a quiet Christmas at home.
I realized the program controlling the robot shown in the video embedded in post #35 wasn't giving the motor control cog enough time to make the final adjustments in the robot's position.
I added some code to the "ExecuteAndWait" method.
PRI ExecuteAndWait(pointer)
  ExecuteStoredCommand(pointer)
  waitcnt(clkfreq * 3 + cnt) ' wait for motors to get some speed
  repeat while speedUsedByControlAlgorithm[0] or speedUsedByControlAlgorithm[1] or {
    } ||gDifference[0] > Header#TOO_SMALL_TO_FIX or ||gDifference[1] > Header#TOO_SMALL_TO_FIX
    ' wait while motors are still turning and until the final destination is reached.
I think the "TOO_SMALL_TO_FIX" constant could be set to zero when using the HB-25 controllers, but with an h-bridge, the lowest practical value on my robot was one.
Waiting until the destination was reached greatly improved the accuracy of the robot. Here's a video showing the robot driving the same course but using a pair of MC33926 h-bridge chips to power the motors. I think it did pretty well.
I think this same change will improve the performance of the robot when using the HB-25 hardware. I'll try to test the HB-25 version tomorrow.
As promised, here's a test with the HB-25 motor controllers.
With the MC33926 h-bridges, if I tried to limit the allowed error to zero, it could take several minutes for the correct position to be reached. With the HB-25 motor controllers, there wasn't a problem when I limited the allowed encoder error to zero.
I personally think these latest tests have gone very well.
I've made some relatively major changes in this update. I reverted to the original encoder object. The encoder object I had been using was one I wrote, and I think the object has the potential of greatly increasing the precision with which the motors' speeds may be measured, but I'm afraid there are likely bugs in my version. I'll work on my version of the encoder code separately for now.
The other major(ish) change is the way the demo program works. Rather than listing all the maneuvers in a method, the maneuvers may now be listed in the DAT section to be played back later.
Here are the maneuvers shown in the latest videos.
twoByOneMRectanglePlusTwo8s word @configDec
word @straightF2000mm, @leftTurn, @straightF1000mm, @leftTurn
word @straightF2000mm, @leftTurn, @straightF1000mm, @right180
word @straightF1000mm, @rightTurn, @straightF2000mm, @rightTurn
word @straightF1000mm, @rightTurn, @straightF2000mm, @left180
word @straightF1000mm, @leftTurn, @straightF500mm
word @rightCircleF299mm, @leftCircleF299mm
word @rightCircleF500mm, @leftCircleF500mm, 0
There is now a "PlayRoute" method which executes these maneuvers.
PRI PlayRoute(routePtr)
  repeat while word[routePtr]
    ExecuteAndWait(word[routePtr] + addressOffsetCorrection)
    routePtr += 2
It wouldn't be hard to add a way of selecting which "route" the robot should execute from the terminal. For now the route's address is hardcoded into the program.
I'm attaching the latest code to post #36 since I've mentioned code being located in post #36 in several places on the forum. I'm attaching both "B" and "C" versions of the code but only the "C" version of this latest update has been tested.
There's also a bit of math involved when working with the Arlo Bot and Eddie firmware. As Nikos points out, there are several dimensions of the robot which are important when making navigation calculations. The number of slots in the encoder disk is also very important, but it's generally pretty easy to figure out: just count them (there are 36 slots in the encoders used with the motor and wheel kit hardware). Dimensions such as the distance between the wheels and the diameter of the wheels are not so easily determined.
I've learned that measuring these dimensions with a ruler or tape measure isn't a good way of obtaining them. One really needs to measure these dimensions indirectly, by measuring the resulting movement of the robot after giving it commands to travel or rotate.
As Ken mentioned in his original post the Eddie firmware includes a constant "POSITIONS_PER_ROTATION". This distance is related to the distance between the wheels of the robot (in encoder ticks). Here's the equation relating the value of the constant "POSITIONS_PER_ROTATION" and the distance between the wheels of the robot.
Distance Between Wheels (in encoder ticks) = POSITIONS_PER_ROTATION / (2 * pi)
This POSITIONS_PER_ROTATION value is important when making turns. When I tried to set the value of POSITIONS_PER_ROTATION based on directly measuring the distance between the wheels of the robot the turns were not as precise as I had wanted them. I decided to have the robot rotate in place ten times and adjust the POSITIONS_PER_ROTATION value until the robot's orientation after ten rotations was aligned with the original orientation.
After some trial and error, I came up with the value of 748 for the constant POSITIONS_PER_ROTATION. The original value was set to 744.
The "TURN" and "ARC" commands use the "POSITIONS_PER_ROTATION" value to calculate how far each wheel should travel.
As I mentioned I had the robot spin in place ten times to figure out the appropriate value of "POSITIONS_PER_ROTATION". I've included a "demo" to do this. Just set the variable "activeDemo" in the top DAT section to the constant "CAL_POS_PER_REV_DEMO". When the demoFlag is set to 1, the robot will spin in place 10 times. Adjust the value of "POSITIONS_PER_ROTATION" until the robot ends the ten rotations in the same orientation as it started. The robot may travel a bit to one side or the other but you're mainly interested in the final orientation of the robot not its final position.
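The relationship above is easy to check numerically. Here's a quick Python sketch using my calibrated value (again, Python as a calculator, not robot code):

```python
import math

POSITIONS_PER_ROTATION = 748  # value found with the ten-rotation test

# Rearranging the equation above gives the wheelbase in encoder ticks.
wheelbase_ticks = POSITIONS_PER_ROTATION / (2 * math.pi)

print(round(wheelbase_ticks, 1))  # prints: 119.0
```

So the calibrated constant implies a distance between the wheels of roughly 119 encoder ticks.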
If you want to plan your robot's course using units other than encoder ticks, you'll need to come up with a conversion factor between encoder ticks and some useful unit of length. I decided to use units of millimeters. One could attempt to calibrate this conversion factor by using the diameter of the wheel but again, I think you'll get better results if you measure the distance the robot travels to obtain the conversion between encoder ticks and millimeters.
I set the "activeDemo" to "CAL_DISTANCE_DEMO" and set the demoFlag to 1. The robot travelled 720 encoder ticks (5 wheel revolutions). I measured the distance travelled and divided it by 720 to obtain the number of millimeters per encoder tick. My robot travelled 2,455mm after five wheel rotations. This gave me a conversion factor of 3.410mm per encoder tick. Since the Eddie firmware uses integers, I multiplied this number by 1000 and used it as the value for the constant "MICROMETERS_PER_TICK".
CON
'' distance travelled per encoder tick = 2,455mm / 720 = 3.410mm
'' bot radius = 59.52 ticks
MICROMETERS_PER_TICK = 3410 '' User changeable
POSITIONS_PER_ROTATION = 748 ' 744 '' User changeable
' "POSITIONS_PER_ROTATION" is the distance one wheel needs to travel to rotate the
' robot a full revolution while one wheel remains stationary.
I got tired of calculating how many encoder ticks I should use with the "TRVL" command, so I added a "MM" command which behaves like the "TRVL" command but receives the distance in units of millimeters instead of encoder ticks. I also added an "ARCMM" command which receives the radius of the arc in units of millimeters.
I've updated the code attached to post #36 once again. Besides the additional commands just mentioned, I also added a "PATH" command which behaves a lot like the TRVL command but uses a separate distance value for each wheel. This is basically an "ARC" command with an alternative set of input parameters. In the "ARC" command, the speed entered is applied to the center of the robot. The outer wheel will turn faster than the set speed and the inner wheel will turn slower than the set speed. With the "PATH" command, the speed parameter is applied to the faster of the two wheels and the slower wheel has its speed scaled down in proportion to its shorter distance.
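Here's roughly how I think of the "PATH" scaling, sketched in Python (the function name and details are made up for illustration; the real implementation is in the Spin source):

```python
def path_speeds(left_ticks, right_ticks, speed):
    # The entered speed is applied to the wheel with the longer distance;
    # the other wheel is slowed in proportion to its shorter distance so
    # both wheels finish their distances at the same time.
    longer = max(abs(left_ticks), abs(right_ticks))
    if longer == 0:
        return 0, 0
    return (speed * abs(left_ticks) // longer,
            speed * abs(right_ticks) // longer)

print(path_speeds(1000, 500, 36))  # prints: (36, 18)
```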
Now that I understand the code a bit better I was able to revisit the attempt to scale the acceleration of the slow wheel. I did this by using two separate acceleration variables. I sent my robot through its test course after making this change but the difference in performance wasn't noticeable.
I added the "Music" object back into the Activity Board version ("C") of the code. I realized the Eddie control board doesn't have an audio out jack so the "Music" object isn't very useful to the "B" version of the code. I moved the "Music" object to the Header object so once again both versions of code can use the same parent object.
As I mentioned in the edit to post #36, I haven't tested the latest version of code with the h-bridge hardware but none of the changes should behave differently when used with a h-bridge when compared with the HB-25 code.
Thanks, I'm pleased with how well the robot is behaving so far. I do think the integral component of the PID algorithm could be improved. I'll probably keep working on this aspect of the program to see if I can add a correction for accumulating errors while the robot is en route rather than just at its destination.
How are those motors on that LiPo? Could it roll around for at least ten minutes?
It's a 3S 5Ah LiPo. I'm sure it would last at least ten minutes. I'd think it's likely the pack would last a couple of hours with moderate driving.
I keep an alarm on my pack and when it goes off, I swap the pack for a freshly charged one. I generally change LiPo packs every few days. I haven't had the robot travel very far yet. Most of my tests are like the ones shown in the videos.
Does the caster cause much adverse steering that you see?
It doesn't appear the caster causes any issues when travelling forward. I haven't done much reverse testing but I'm pretty sure the robot wouldn't track as straight in reverse.
I've just made a major update to my GitHub account. I've renamed the various files, removing the date stamp from the names. The date stamps in the file names were my own form of version control, but now that I'm using GitHub the date stamps are not needed and just add clutter to the repository.
I think it's going to take some time for me to get used to using GitHub.
Thank you Duane, the LiPo looked smaller; I misjudged the size of the chassis and was guessing 2.2Ah. I am following this closely as I transition back to wheels. I think the Arlo base and those wheels are a good match for accurate odometry; my last tracked robot was not, and would chew through a 5Ah LiPo quickly.
In order to monitor my progress on reducing the position error, I've decided to include code to control WS2812B LEDs. I also added three new commands to control the LEDs.
"BRT": <brightness> Set the brightness of the WS2812x LEDs. Range from zero (off) to
$FF (full brightness).
"COLOR": <channel><color in hexadecimal> Set the color of a single WS2812x LED. The range
for the channel parameter is from zero to MAX_LED_INDEX. The range for the color parameter
is zero to $FFFFFF. The color is received in hexadecimal notation even if the decInFlag
has been set. The colors will be adjusted by the value of "brightness" before being sent
to the WS2811 driver. The red value is the most significant byte. To set the first LED
to red use the command "COLOR 0 FF0000", to set the second LED green use "COLOR 1 FF00".
To set the third LED blue use "COLOR 2 FF".
"COLORS": <first channel><last channel><color in hexadecimal> Set the color of multiple
WS2812x LEDs. The range for the two channel parameters is from zero to MAX_LED_INDEX.
The range for the color parameter is the same as for the command "COLOR".
To set the second through fifth LED white use the command "COLORS 1 4 FFFFFF". As with
the "COLOR" command the color parameter is received in hexadecimal notation even if the
decInFlag has been set.
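The brightness adjustment mentioned above amounts to scaling each color channel before the value reaches the driver. Here's a Python sketch of the idea (illustration only; the actual driver is JonnyMac's WS2812 Spin object):

```python
def apply_brightness(color, brightness):
    # Scale each 8-bit channel of a $RRGGBB color by brightness / 255.
    r = ((color >> 16) & 0xFF) * brightness // 255
    g = ((color >> 8) & 0xFF) * brightness // 255
    b = (color & 0xFF) * brightness // 255
    return (r << 16) | (g << 8) | b

print(hex(apply_brightness(0xFF0000, 0x80)))  # half-brightness red: 0x800000
```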
The LED code has only been added to the "EddieAbExperimental.spin" file. The main programs don't include the LED code.
I'm not sure if the LED code should be added to primary versions. There is a free cog and PCs can't really directly control WS2812 LEDs so I'm not sure what's wrong with a bit more feature creep. I don't see how the LED code would negatively impact the performance of the program.
Any thoughts from Parallax about adding WS2812 support to the Eddie firmware?
My robot is smaller than an Arlo Bot but the distance between the wheels of my robot should be the same as the distance between the wheels of the Arlo Bot.
I hooked up ten WS2812B fun boards to my robot and had the LEDs display the number of encoder ticks the faster wheel was ahead of the slower wheel. When the left wheel is faster, the LEDs are red. When the right wheel is faster than the left wheel, the position difference is shown with green LEDs.
I had the robot run the course shown in the video embedded in post #41 (there are several videos showing the robot maneuver this same course). I watched the LEDs to see how much error there was as the robot drove around the rectangle. I initially thought I must not be using the right variables in my comparisons, since a single LED would flash red or green but there were rarely multiple LEDs lit while the robot was driving in a straight line.
After some experimenting, I found I could produce larger encoder errors by travelling faster or spinning in place.
Here's the long (and I'm afraid rather boring) video of my attempt to document the amount of encoder error while performing various maneuvers.
With the music object active and using a cog to run JonnyMac's WS2812 driver, I'm now using all 8 cogs of the Activity Board's Propeller. Some of the cogs could probably perform additional tasks so it would be very possible to add some additional features. The code used to control the robot in the video had 2,409 free longs.
I think there's enough error in the current control algorithm to warrant the addition of a proper integral component to the PID algorithm. I'm hopeful I'll make some progress toward this goal this weekend.
So far, I've only added the LED object to my "experimental" version of the code. I'm wondering if I should add the WS2812 support to the other versions of the code. The constants "LED_PIN" or "LEDS_IN_USE" could be used to indicate if the WS2812 driver should be started or not.
I'm hoping to hear other people's thoughts about the possibility of including WS2812 support in the Eddie firmware.
IMO, adding WS2812 support is a good idea. The Propeller can happily continuously send data to the WS2811 chips without the added feature interfering with the other features already present in the code. This is another example of why the Propeller makes a great microcontroller for robotic projects.
Not only does the Eddie firmware control the motors while monitoring the encoder feedback but it also can continuously refresh servos, monitor Ping sensors, play music and control LEDs all in the background and without slowing down the main program loop.
I think I've talked myself into adding the WS2812 support to the Eddie firmware. Unless I'm instructed otherwise, I'll likely do this.
As I've mentioned several times lately, I'm now using GitHub to store the Eddie firmware.
The archive "Eddie - Archive [Date ****].zip" contains the code to use with the original Eddie control board.
The archive "EddieActivityBoard - Archive [Date ****].zip" contains code to use with a Propeller Activity Board and two HB-25 motor controllers.
Hi Duane, I like where you are going with this and I think the LED support is great for many uses. I am robot dumb but have some feeble-minded questions.
1) Have you tried your experiments on hard floors such as concrete?
2) Is there perhaps more weight on the right motor than the left?
Sorry if this was discussed already, could have missed it!! Keep up the great work!
Hi Duane, I like where you are going with this and I think the LED support is great for many uses. I am robot dumb but have some feeble-minded questions.
Thanks for the feedback. I'll probably add the LED support unless someone from Parallax instructs me otherwise. Of course it would be easy to have a fork of the main Eddie firmware which included the LED support so it doesn't matter much if it's included in the main version of the Eddie firmware.
I'm not one who thinks there's no such thing as a dumb question but I think your questions are spot on and I appreciate your asking them.
1) Have you tried your experiments on hard floors such as concrete?
Not yet. I think it's generally accepted that robots track better on hard surfaces, but the robot's ability to follow a preprogrammed course isn't really the issue here. I think if the program demonstrated in post #41 were run with the robot on a hard surface, the robot would likely end up closer to the theoretical final destination.
My concern right now is how far ahead one wheel is of the other as the robot is moving.
I would like to test the robot on a hard floor but there's over six inches of snow outside and I don't really have a hard surface large enough to do much testing. I suppose I could have the robot drive around in the kitchen/dining area but there's not as much room in the kitchen as there is in the living room.
I'll probably try to figure out some sort of course the robot could drive in the kitchen/dining area.
2) Is there perhaps more weight on the right motor than the left?
It's possible, but I doubt that's causing the difference in encoder readings. The two motors of the robot don't have identical characteristics. On the bench, with the wheels free to spin, the right motor draws a bit more current than the left motor. I think the differences in wheel position is more likely caused by the differences between motors than by the distribution of weight on the robot.
But even if one side of the robot were heavier than the other, the code should compensate for the differences the uneven weight would cause.
IMO, the code should be able to keep the robot driving straight if one wheel were on heavy shag carpet and one wheel on concrete. Now that you mention it, I should add extra weight to one side of the robot or the other during testing to make sure the code can adjust for these unsymmetrical loads.
Besides the error shown in post #48, there's often a bit of error as the robot rolls to a stop. In many of the videos the robot can be seen adjusting itself after coming to a stop in order to bring the encoder count into agreement with the goal count.
I'd really like to do something about the way the robot wiggles into its final position.
Ideally, I'd like the robot to behave (almost) as well as the Scribbler 2. After seeing NikosG's The Artist robot, I changed the design of my robot. I originally planned to have the top level of my robot held up with a "T" shaped support in the center of the robot. After seeing The Artist, I decided to keep the center area clear in case I wanted to copy NikosG's idea of using a larger robot to draw shapes.
I'm still trying to figure out what sort of strategy I should use when using this hardware without a PC. One method I've tried is using a second Propeller board as the master and having it send commands to the board running the Eddie firmware. This works, but it's kind of cumbersome. I'm inclined to think this code would need to be severely modified for use without a PC. Rather than using the main loop to monitor communication from the PC, the main loop should be analyzing sensor data and making decisions based on this data. The main loop should also work towards programmed goals and tasks. A Propeller-only program would lend itself to using some sort of game controller as an input device.
Hello Duane,
You've made fantastic progress so far! I think the current direction of the Eddie firmware is the way to go. Anyone who wants to alter the code for stand-alone operation can always do that. All the core routines needed to control the motors, etc. will be there. About the only change I can think of would be to have the option when compiling to use the USB port or the XBee to send/receive commands/messages.
I think perhaps a good follow-on Open Propeller Project would be a Propeller program for a second controller to act as a host controller instead of a PC. That could be set up to leverage one of the Wii controllers, Parallax Joystick, etc. for manual robot control. If I recall correctly someone had a pretty cool Propeller powered joystick on Kickstarter a while back with an XBee, and that could probably be set to output commands like the PC would and send them over the XBee to the robot.
With the current method of control the Eddie board acts like an intelligent peripheral with a well defined command protocol. As it stands now it can be connected and controlled by a PC, another Propeller, or even a Raspberry Pi.
I have my robot that uses this wheel kit in pieces to upgrade the electronics and encoders, and once that's done I'll get a chance to try your current code. Instead of a PC it has a Raspberry Pi that is going to send commands to the board running the new Eddie firmware.
Anyone who wants to alter the code for stand-alone operation can always do that. All the core routines needed to control the motors, etc. will be there. About the only change I can think of would be to have the option when compiling to use the USB port or the XBee to send/receive commands/messages.
It would be pretty easy to change which I/O pins are used for the control interface. If one could get by with a slower baud rate, it should be possible to keep the USB connection while adding XBee support. I'm not sure the slower baud rate is really needed if only one port is active. I have had trouble using the four port object when two ports are actively using the com lines at 115200 bps. The serial object in use is a modified version of Tracy's four port driver. I plan to switch back to Tracy's original object with the next revision of the code.
I think perhaps a good follow-on Open Propeller Project would be a Propeller program for a second controller to act as a host controller instead of a PC. That could be set up to leverage one of the Wii controllers, Parallax Joystick, etc. for manual robot control.
I tried this. Earlier in my testing, I used a second Propeller as a master to the board running the Eddie firmware. While I'm sure this could be worked out, my initial tests were very unsatisfactory. There was a lot of delay between the control movements of my wireless Wii Nunchuck and movement of the robot. I'm sure part of the delay was caused by multiple debug statements in the same cog as the cog reading the Wii controller.
The code I presently use for my own robot uses the Activity Board as the master Propeller board and a second Propeller board (presently a QuickStart) as a slave device. I've tried controlling my robot with the Wii Nunchuck with the Activity Board reading the joystick values directly, and the robot was much easier to control.
The code I'm using for my personal use is in the "Cleaver" folder of my GitHub repository. So far the board running the "CleaverSlave.spin" object is mainly concerned with computing 3D locations based on the servo positions and range readings from the SF02. I haven't really integrated the two boards' efforts yet.
If I recall correctly someone had a pretty cool Propeller-powered joystick on Kickstarter a while back with an XBee, and that could probably be set to output commands like the PC would and send them over the XBee to the robot.
Paul K. ran the KickStarter on his Q2 and Q4 controllers. I have one of the Q4 controllers. I hope to use the Q4 with this robot sometime.
With the current method of control the Eddie board acts like an intelligent peripheral with a well-defined command protocol. As it stands now it can be connected to and controlled by a PC, another Propeller, or even a Raspberry Pi.
As the Eddie firmware currently works it could even be used as a slave to an Arduino. While this seems rather insulting to the Propeller, if it allowed someone comfortable programming the Arduino to make a better robot than would have been otherwise possible, I'm all for it.
Hopefully the Eddie firmware will make it relatively easy for anyone capable of programming a device to send and read serial data to control some very nice hardware, even if they have limited programming experience.
One should be able to control the firmware with a Basic Stamp.
My robot that uses this wheel kit is in pieces while I upgrade the electronics and encoders; once that's done I'll get a chance to try your current code. Instead of a PC it has a Raspberry Pi that is going to send out the commands to the board running the new Eddie firmware.
I'm very anxious to learn how this goes. I'm very interested to hear your opinions of how the current motor control compares with the earlier motor control boards.
I want to try installing the controllers you sent me but it takes a bit of work to switch out the encoders. I'm not sure when I'll want to put that much effort into switching the encoders just to compare the difference in performance. I hope the current hardware with its 36 spokes outperforms the earlier hardware with its 9 spokes.
Right now, I'm on a bit of a side track as I attempt to use the SF02 to detect obstacle heights.
I'm very anxious to learn how this goes. I'm very interested to hear your opinions of how the current motor control compares with the earlier motor control boards.
I want to try installing the controllers you sent me but it takes a bit of work to switch out the encoders. I'm not sure when I'll want to put that much effort into switching the encoders just to compare the difference in performance. I hope the current hardware with its 36 spokes outperforms the earlier hardware with its 9 spokes.
One nice feature of the old position controllers is that they acted like a motor control co-processor and you could just hand off some of the movement tasks to them. Ideal for something like the BASIC Stamps. I believe the C source is available (at least it was) for the position controllers and maybe some of the logic can be pulled out and leveraged for the #8 Open Source Propeller project.
There are a couple of downsides to the original position controllers vs. the new ones. The resolution isn't as fine as that of the new quadrature encoders, so the movements may not be as accurate as with the new version. The other issue is that in most installations one of the position controllers needs to be set to reversed orientation. That is a soft setting and doesn't take effect until you send that command to the controller. I mentioned this on the forums a while back:
My large robot has a relay to control power to the HB-25's and I keep that off until after the position controller has received the SREV command. Otherwise if you bump a wheel on the reversed side without sending the SREV first then that motor will start turning (to correct for it) but goes the wrong way and just takes off and keeps running. If you aren't prepared for this then your robot can run you over....
I still have a set of the original position controllers that I am keeping for testing as well.
I believe the C source is available (at least it was) for the position controllers and maybe some of the logic can be pulled out and leveraged for the #8 Open Source Propeller project.
Yes, I've found the source code for the Position Controllers a while back. I recently started translating this C code to Spin but not long into the translation process, I had repeated experiences of d
....
It would be nice, if by changing a few parameters, the code used to control the Arlo motors could also be used to control the 30:1 gear motors Parallax sells.
....
+1 Then I could use the code to control my Parallax Stingray Robot Chassis
+1 Then I could use the code to control my Parallax Stingray Robot Chassis
This is my intention. I have lots of motors with encoders and I'd really like some sort of universal object to control them. I think it's possible, but when using encoders with lots of transitions per revolution, it's not as important to calculate speed based on the time between transitions.
I'm also concerned about the integer math involved. I'll likely need to add some sort of scaling correction for encoders with lots of transitions and encoders with very few transitions may also require the equations to be adjusted. I think I'm getting close to figuring this out.
Do you have encoders on your Stingray motors? I have some of the motors but I don't have encoders for them yet. Oh, or is this your hybrid robot with ActivityBot encoders? If so, you'll need to use a modified, single channel, version of the code (which I hope to write).
Oh, or is this your hybrid robot with ActivityBot encoders? If so, you'll need to use a modified, single channel, version of the code (which I hope to write).
After some experimenting, I found I could produce larger encoder errors by travelling faster or spinning in place.
I came here wondering if you had found a solution to this, because this is exactly what I find when using the ActivityBot->ArloBot C code that only uses "half" of the quadrature encoder.
I was wondering perhaps if using the full quadrature would help, but it sounds like the issue is deeper than that.
I came here wondering if you had found a solution to this, because this is exactly what I find when using the ActivityBot->ArloBot C code that only uses "half" of the quadrature encoder.
I was wondering perhaps if using the full quadrature would help, but it sounds like the issue is deeper than that.
I think it's extremely likely the current Eddie firmware could be improved. The main improvement would be to use a more precise speed measurement over a shorter sample period.
I'm pretty sure the small over/under shoots seen in the video attached to post #41 could be done away with.
I believe my current PWM/encoder object can measure speed very precisely and very quickly by computing the speed from either a single encoder pulse or a single encoder cycle. The PASM code currently gathers the data needed for this precise speed measurement but the Spin code doesn't make use of this data. I think using the precise speed data in an adjusted PID algorithm would improve the observed performance of the Arlo hardware.
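The single-pulse measurement described above can be sketched in a few lines of Spin. This is only an illustration of the idea, not the actual object's code; the method name and the assumption that the PASM cog stores the clock-tick count between adjacent encoder transitions are mine.

```spin
PUB TicksPerSecond(pulsePeriod) : speed
  '' Sketch of the single-pulse speed measurement described above.
  '' pulsePeriod is assumed to be the number of system-clock ticks the
  '' PASM cog counted between two adjacent encoder transitions.
  if pulsePeriod > 0
    speed := clkfreq / pulsePeriod              ' encoder transitions per second
```

Because the speed is known after every transition rather than after a fixed sample window, the PID loop could react within a single encoder tick.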
I really don't understand the C code well enough to comment much about it. It seems logical that using both encoder channels would improve the performance of the robot.
And while there's room for improvement in the Eddie firmware, I'm reasonably pleased with how the code is working. As I mention in post #41, the robot moves to the exact encoder tick specified so the navigation ability of the robot can't really be improved without using other sensors. The main reason I would want to improve the code would be to use the robot like Niko's Artist robot. The small "searching" for the exact encoder tick which often occurs at the end of a maneuver doesn't cause a problem when navigating using encoders but it would interfere with a robot trying to draw smooth lines. It would be nice if the Arlo could be commanded as precisely as the Scribbler 2 (adjusted for encoder resolution).
I have several robot projects which use quadrature encoder feedback. Once I have my CNC router up and running, I'll likely continue work on getting my quadrature encoder code working better.
Just writing to provide some feedback. I installed the Eddie program with your firmware and it works great. Thank you for all your effort.
Even though it works fine in direct mode, I couldn't make it work with the obstacle avoidance. To which pins should I connect the sensors? Any ideas?
Comments
The code I posted should work with the original Eddie code on the PC.
I think the PC converted the input from the XBox controller to "GO" commands with the appropriate left and right power levels.
There are some game controllers which can be connected directly to the Propeller, but generally game controllers with a USB interface require a PC in order to communicate with the controller.
It wouldn't be very hard to add support for a Wii Nunchuck or PlayStation 2 controller to the software but then the overall structure to the program would probably need to be changed since the Eddie firmware is intended to be used as a slave controller to a PC.
I'm still trying to figure out what sort of strategy I should use when using this hardware without a PC. One method I've used is to use a second Propeller board as the master and use it to send commands to the board running the Eddie firmware. This works but it's kind of cumbersome. I'm inclined to think this code would need to be severely modified for use without a PC. Rather than using the main loop to monitor communication from the PC, the main loop should be analyzing sensor data and making decisions based on this data. The main loop should also work towards programmed goals and tasks. A Propeller only program would lend itself to using some sort of game controller as an input device.
BTW, for those following along, I think I have a workable hack to make the "POSITION" mode commands ("ARC", "TRVL" and "TURN") reach their final destinations rather than stopping a few encoder clicks short of their goals. I'm working on making a demo showing what sort of precision can be expected using this new code. I hope to post another video and the code later today.
I didn't add a full fledged integral component to the control algorithm. The error correction only occurs once the moving target position "midPosition" reaches the final target position "setPosition". Once these two targets agree with each other then the position error gets added to an integral variable which keeps increasing the power output until the wheel reaches its destination. It's not a very pretty fix but it does appear to work.
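The hack described above could look something like the following Spin sketch. All the variable names besides "midPosition", "setPosition" and "kIntegralNumerator" (which are mentioned in these posts) are my assumptions, as are the tuning values; this is a reconstruction of the idea, not the firmware's actual code.

```spin
CON
  kIntegralNumerator   = 1                      ' assumed tuning values
  kIntegralDenominator = 8

VAR
  long midPosition[2], setPosition[2], currentPosition[2]
  long integral[2], power[2]

PUB ApplyIntegralKick(side) | error
  '' Only accumulate the integral once the moving target has caught up
  '' with the final target, then keep nudging the power output until the
  '' wheel reaches its destination.
  if midPosition[side] == setPosition[side]
    error := setPosition[side] - currentPosition[side]
    integral[side] += error                     ' grows until the error is zero
    power[side] += integral[side] * kIntegralNumerator / kIntegralDenominator
  else
    integral[side] := 0                         ' no integral action while still tracking
```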
I haven't tested this new code on the traditional h-bridge hardware but I'm pretty sure it will work with either hardware configuration.
In an attempt to calibrate my robot, I had it drive forward five full revolutions (720 encoder ticks). I measured this distance to compute the distance travelled per encoder tick. This ended up being 3.357mm per encoder tick. I also measured the distance between the wheels of my robot in order to figure out the radius of the robot in units of encoder ticks. My robot has a radius of 60 encoder ticks. It's a good idea for any of you wanting to use this software to make these measurements yourself and to modify the following constants in the header object to match your robot.
The constant "POSITIONS_PER_ROTATION" ends up being twice the circumference of the robot or pi * BOT_RADIUS_E * 4.
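The measurements above translate into constants along these lines (a sketch of the header object's CON section; only the values and names stated in this post are from the firmware, and the thread later tunes POSITIONS_PER_ROTATION by test driving rather than trusting this formula):

```spin
CON
  ' Values measured for my robot -- measure your own and adjust.
  MICROMETERS_PER_TICK   = 3_357                ' 3.357mm of travel per encoder tick
  BOT_RADIUS_E           = 60                   ' robot radius in encoder ticks
  POSITIONS_PER_ROTATION = 754                  ' pi * BOT_RADIUS_E * 4, rounded
```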
I made another video with this latest modification but the difference in performance wasn't noticeable. I'll wait until I have a more interesting video before posting another one.
I hope some of you try this version of the code and let me know how it works for you. Please let me know which hardware configuration you're using when posting feedback.
I still haven't tested the ADC command with the version "B" code. I should have a compatible ADC chip wired up soon to test it. If any of you get a chance to test it, I'd appreciate knowing if it works correctly or not.
Edit: See post #36 for the latest version of the code. There is a small bug in this version of the code.
Edit(12/29/14) To reduce confusion about which code to use, I'm leaving only the latest versions attached to post #36.
It turns out the value of "BOT_RADIUS_E" isn't precise enough to use the above equation to compute "POSITIONS_PER_ROTATION".
I had my robot execute ten turns in a row to see how well the robot was aligned with the original direction after the ten rotations, and I found my robot was off by about 30 degrees.
It took a couple of tries but I finally used the value 749 for "POSITIONS_PER_ROTATION" and the robot ended up pointing the same direction after ten rotations. The exact value one uses for "POSITIONS_PER_ROTATION" will probably vary among different robots. If anyone needs help getting their robot calibrated let me know and I'll post some code to rotate the robot ten times.
I've added a second deck (of sorts) to my robot and mounted a SF02 Laser Rangefinder to the top deck. The rangefinder is on a pan and tilt gizmo which I'm hoping to use to let the robot better sense its surroundings.
While this video certainly isn't an exciting one, it does illustrate the limits of using encoder feedback, particularly when driving on carpet. The first rectangle looks good but things get progressively worse as time goes on. I'm still not sure what the robot was trying to do at the end (it was probably doing what I told it to do).
Edit(12/29/14) There's a more recent video of the latest firmware in action embedded in post #41.
Edit(4/19/20): Fixed links.
I found a bug in the code.
I was using the following code to clear the "stillCnt" array.
stillCnt was a byte array so the longfill zeroed out unintended memory locations.
To allow great flexibility in timing the shutdown of motors from no movement, I decided to change "stillCnt" to a long sized array.
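Since the buggy snippet itself isn't shown in this excerpt, here's a minimal reconstruction of the problem and the fix. The array length of two (one entry per motor) is my assumption.

```spin
VAR
  long stillCnt[2]                              ' fix: now declared as longs

PUB ClearStillCounts
  '' The original declaration was "byte stillCnt[2]", so this longfill
  '' wrote 8 bytes into a 2-byte array, clobbering neighboring variables.
  '' With the array declared as longs, the same fill is now safe.
  longfill(@stillCnt, 0, 2)
```

The alternative fix would have been to keep the byte array and use bytefill instead, but as noted above, the long-sized array allows more flexibility in timing the motor shutdown.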
I've attached the code with the bug fix.
As usual the "B" version of the code is intended to be used with the Eddie Control Board. The "C" version is for use with a Propeller Activity Board and HB-25 motor controllers.
Edit(1/1/15):The code is now stored on my GitHub account. What I had been calling version "B" is now "Eddie.spin". "EddieActivityBoard.spin" is what was previously called version C. Both of these versions are available as archives.
See notes in post #38 about testing this code with a terminal. See additional information about calibrating your robot in post #42
I'll get this figured out and upload corrected files soon.
The version "EddieB141225d" I posted earlier does appear to work but it doesn't include the integral correction.
It takes me a bit of time to change from one hardware configuration to the other. I thought the changes I had made wouldn't be affected by the different hardware.
I have noticed a small difference in the way the program behaves when using the HB-25 motor controllers and a normal h-bridge. When using the h-bridge the destination keeps getting overshot. I used a quick fix of adding a "TOO_SMALL_TO_FIX" constant ("DEADZONE" is already used). I think this overshoot could be fixed by changing the "kIntegralNumerator" value. If someone with h-bridge hardware gets around to tuning this value before I do, I hope you share your settings.
As mentioned earlier in this thread, the firmware can be tested with the Parallax Serial Terminal. If a terminal window is used to test the program, you'll want to type "watch 0" (or "kill 0") to turn off the watchdog feature. The default setting on the watchdog timer is one second. Setting the timer to zero disables it.
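Putting the terminal commands from these posts together, a test session might start like this (the command names come from the posts above; the firmware's reply formatting is omitted):

```text
watch 0      disable the one-second watchdog ("kill 0" also works)
decin 1      accept commands with decimal parameters
decout 1     reply in decimal instead of hexadecimal
```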
The program's default input and output is in hexadecimal but to make it easier to type commands from a terminal, the commands "decin 1" and "decout 1" will change the input and output to decimal format.
You can change the default input and output by changing the value of the "decInFlag" and "decOutFlag" variables in the top DAT section.
The debugFlag will cause different amounts of information to be displayed depending on the value. A value of zero will only display the data returned from the commands entered.
The demoFlag will cause the "ScriptedProgram" method to run. I use this to test the various commands without the need of having a PC connected to the Propeller. The demoFlag may be set from the terminal window with the command "demo 1".
I have been following with envy. You have done so much in a small amount of time. With Spin files dated 12/25/2014, I hope your wife is not unhappy with your working on those days.
Unfortunately I do not have the products on hand to help you test. I wish I did. I have been working on ELEV8 V1 and ELEV8 V2 rebuilds. I have been sending Courtney and Ken my revisions to the V2 build manual and pictures, which she has incorporated. Still have a one or two to do.
Hope to purchase the equipment very soon.
Jim
All is well with my wife. We had a nice mellow Christmas this year. We didn't travel and we didn't have any guests. While it's fun to spend the holidays with family and friends, it's also nice to have a quiet Christmas at home.
I realized the program controlling the robot shown in the video embedded in post #35 wasn't giving the motor control cog enough time to make the final adjustments in the robot's position.
I added some code to the "ExecuteAndWait" method.
I think the "TOO_SMALL_TO_FIX" constant could be set to zero when using the HB-25 controllers, but with an h-bridge, the lowest practical value on my robot was one.
Waiting until the destination was reached greatly improved the accuracy of the robot. Here's a video showing the robot driving the same course but using a pair of MC33926 h-bridge chips to power the motors. I think it did pretty well.
I think this same change will improve the performance of the robot when using the HB-25 hardware. I'll try to test the HB-25 version tomorrow.
As promised, here's a test with the HB-25 motor controllers.
With the MC33926 h-bridges, if I tried to limit the allowed error to zero, it could take several minutes for the correct position to be reached. With the HB-25 motor controllers, there wasn't a problem when I limited the allowed encoder error to zero.
I personally think these latest tests have gone very well.
I've made some relatively major changes in this update. I reverted to the original encoder object. The encoder object I had been using was one I wrote, and I think that object has the potential of greatly increasing the precision with which the motors' speeds may be measured, but I'm afraid there are likely bugs in my version. I'll work on my version of the encoder code separately for now.
The other major(ish) change is the way the demo program works. Rather than listing all the maneuvers in a method, the maneuvers may now be listed in the DAT section to be played back later.
Here are the maneuvers shown in the latest videos.
There is now a "PlayRoute" method which executes these maneuvers.
It wouldn't be hard to add a way of selecting which "route" the robot should execute from the terminal. For now the route's address is hardcoded into the program.
I'm attaching the latest code to post #36 since I've mentioned code being located in post #36 in several places on the forum. I'm attaching both "B" and "C" versions of the code but only the "C" version of this latest update has been tested.
In the thread Nikos goes into a lot of detail about the math involved in getting the robot to travel the desired course. He has some great graphics showing the various dimensions of some of the Parallax robots.
There's also a bit of math involved when working with Arlo Bot and Eddie firmware. As Nikos points out, there are several dimensions of the robot which are important when making navigation calculations. The number of slots in the encoder disk is also very important. The number of slots in the encoder disk is generally pretty easy to figure out. Just count them (there are 36 slots in encoders used with the motor and wheel kit hardware). The dimensions such as the distance between the wheels and the diameter of the wheels are not so easily determined.
I've learned that measuring these dimensions with a ruler or tape measure isn't a good way of obtaining them. One really needs to measure these dimensions indirectly, by measuring the resulting movement of the robot after commands to travel or rotate.
As Ken mentioned in his original post the Eddie firmware includes a constant "POSITIONS_PER_ROTATION". This distance is related to the distance between the wheels of the robot (in encoder ticks). Here's the equation relating the value of the constant "POSITIONS_PER_ROTATION" and the distance between the wheels of the robot.
Distance Between Wheels (in encoder ticks) = POSITIONS_PER_ROTATION / (2 * pi)
This POSITIONS_PER_ROTATION value is important when making turns. When I tried to set the value of POSITIONS_PER_ROTATION based on directly measuring the distance between the wheels of the robot the turns were not as precise as I had wanted them. I decided to have the robot rotate in place ten times and adjust the POSITIONS_PER_ROTATION value until the robot's orientation after ten rotations was aligned with the original orientation.
After some trial and error, I came up with the value of 748 for the constant POSITIONS_PER_ROTATION. The original value was set to 744.
The "TURN" and "ARC" commands use the "POSITIONS_PER_ROTATION" value to calculate how far each wheel should travel.
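The wheel-travel calculation for an in-place turn reduces to a one-line formula. The sketch below is my own illustration of the relationship, not the firmware's exact code, and uses the tuned value from this post:

```spin
CON
  POSITIONS_PER_ROTATION = 748                  ' tuned by the ten-rotation test

PUB TurnTicks(degrees) : ticks
  '' Encoder ticks each wheel must travel (in opposite directions) to
  '' rotate the robot in place by "degrees".
  ticks := degrees * POSITIONS_PER_ROTATION / 360
```

For example, a 90-degree turn works out to 187 ticks per wheel with these numbers.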
As I mentioned I had the robot spin in place ten times to figure out the appropriate value of "POSITIONS_PER_ROTATION". I've included a "demo" to do this. Just set the variable "activeDemo" in the top DAT section to the constant "CAL_POS_PER_REV_DEMO". When the demoFlag is set to 1, the robot will spin in place 10 times. Adjust the value of "POSITIONS_PER_ROTATION" until the robot ends the ten rotations in the same orientation as it started. The robot may travel a bit to one side or the other but you're mainly interested in the final orientation of the robot not its final position.
If you want to plan your robot's course using units other than encoder ticks, you'll need to come up with a conversion factor between encoder ticks and some useful unit of length. I decided to use units of millimeters. One could attempt to calibrate this conversion factor by using the diameter of the wheel but again, I think you'll get better results if you measure the distance the robot travels to obtain the conversion between encoder ticks and millimeters.
I set the "activeDemo" to "CAL_DISTANCE_DEMO" and set the demoFlag to 1. The robot travelled 720 encoder ticks (5 wheel revolutions). I measured the distance travelled and divided it by 720 to obtain the number of millimeters per encoder tick. My robot travelled 2,455mm after five wheel rotations. This gave me a conversion factor of 3.410mm per encoder tick. Since the Eddie firmware uses integers, I multiplied this number by 1000 and used it as the value for the constant "MICROMETERS_PER_TICK".
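The arithmetic above can be captured in a small conversion method. The constant value comes from this post's measurement (2,455mm over 720 ticks); the method name is hypothetical and the firmware may organize this differently:

```spin
CON
  MICROMETERS_PER_TICK = 3_410                  ' 2_455mm / 720 ticks, scaled by 1000

PUB MmToTicks(mm) : ticks
  '' Convert a requested distance in millimeters to encoder ticks using
  '' integer math, as the Eddie firmware avoids floating point.
  ticks := mm * 1_000 / MICROMETERS_PER_TICK
```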
I got tired of calculating how many encoder ticks I should use when using the "TRVL" command so I added a "MM" command which behaves like the "TRVL" command but receives the distance in units of millimeters instead of encoder ticks. I also added an "ARCMM" which receives the radius of the arc in units of millimeters.
I've updated the code attached to post #36 once again. Besides the additional commands just mentioned, I also added a "PATH" command which behaves a lot like the TRVL command but uses a separate distance value for each wheel. This is basically an "ARC" command with an alternative set of input parameters. In the "ARC" command, the speed entered is applied to the center of the robot. The outer wheel will turn faster than the set speed and the inner wheel will turn slower than the set speed. With the "PATH" command, the speed parameter is applied to the faster of the two wheels and the slower wheel has its speed scaled down proportional to its shorter distance.
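The proportional scaling described for the "PATH" command can be sketched like this (my illustration of the idea, with assumed names; distances are assumed positive):

```spin
PUB SlowWheelSpeed(fastDist, slowDist, speed) : slowSpeed
  '' The commanded speed applies to the wheel with the longer distance;
  '' the other wheel is slowed proportionally so both wheels finish
  '' their distances at the same time.
  if fastDist <> 0
    slowSpeed := speed * slowDist / fastDist
```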
Now that I understand the code a bit better I was able to revisit the attempt to scale the acceleration of the slow wheel. I did this by using two separate acceleration variables. I sent my robot through its test course after making this change but the difference in performance wasn't noticeable.
I added the "Music" object back into the Activity Board version ("C") of the code. I realized the Eddie control board doesn't have an audio out jack so the "Music" object isn't very useful to the "B" version of the code. I moved the "Music" object to the Header object so once again both versions of code can use the same parent object.
As I mentioned in the edit to post #36, I haven't tested the latest version of the code with the h-bridge hardware, but none of the changes should behave differently with an h-bridge compared with the HB-25 hardware.
I'm hoping some of you will try this out.
I haven't tested it with ROS 4. I tried to keep things backwards compatible. Please let me know if there are any problems.
Does the caster cause much adverse steering that you see?
I've given up on tracks and would like to get some wheels. Thanks.
Thanks, I'm pleased with how well the robot is behaving so far. I do think the integral component of the PID algorithm could be improved. I'll probably keep working on this aspect of the program to see if I can add a correction for accumulating errors while the robot is en route rather than just at its destination.
It's a 3S 5Ah LiPo. I'm sure it would last at least ten minutes. I'd think it's likely the pack would last a couple of hours with moderate driving.
I keep an alarm on my pack and when it goes off, I swap the pack for a freshly charged one. I generally change LiPo packs every few days. I haven't had the robot travel very far yet. Most of my tests are like the ones shown in the videos.
It doesn't appear the caster causes any issues when travelling forward. I haven't done much reverse testing but I'm pretty sure the robot wouldn't track as straight in reverse.
I've just made a major update to my GitHub account. I've renamed the various files, removing the date stamp from the names. The date stamps in the file names were my own form of version control, but now that I'm using GitHub the date stamps are not needed and just add clutter to the repository.
I think it's going to take some time for me to get used to using GitHub.
The LED code has only been added to the "EddieAbExperimental.spin" file. The main programs don't include the LED code.
I'm not sure if the LED code should be added to primary versions. There is a free cog and PCs can't really directly control WS2812 LEDs so I'm not sure what's wrong with a bit more feature creep. I don't see how the LED code would negatively impact the performance of the program.
Any thoughts from Parallax about adding WS2812 support to the Eddie firmware?
My robot is smaller than an Arlo Bot but the distance between the wheels of my robot should be the same as the distance between the wheels of the Arlo Bot.
I hooked up ten WS2812B fun boards to my robot and had the LEDs display the number of encoder ticks the faster wheel was ahead of the slower wheel. When the left wheel is faster, the LEDs are red. When the right wheel is faster than the left wheel, the position difference is shown with green LEDs.
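A sketch of that display logic is below. The variable names, the driver's method signature, and the color values are all my assumptions; I'm assuming a WS2812 driver (such as JonnyMac's, mentioned later in the thread) that exposes a set-pixel call taking an index and an RGB value.

```spin
CON
  LEDS_IN_USE = 10                              ' ten WS2812B Fun Boards

OBJ
  leds : "jm_ws2812"                            ' assumed driver object

VAR
  long leftPosition, rightPosition

PUB ShowEncoderError | diff, i
  '' Each tick of error lights one more LED: red when the left wheel is
  '' ahead, green when the right wheel is ahead.
  diff := leftPosition - rightPosition
  repeat i from 0 to LEDS_IN_USE - 1
    if diff > i
      leds.set(i, $1F_00_00)                    ' red: left wheel ahead
    elseif diff < -i
      leds.set(i, $00_1F_00)                    ' green: right wheel ahead
    else
      leds.set(i, $00_00_00)                    ' off
```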
I had the robot run the course shown in the video embedded in post #41 (there are several videos showing the robot maneuver this same course). I watched the LEDs to see how much error there was as the robot drove around the rectangle, and I initially thought I must not be using the right variables in my comparisons, since only a single LED would flash red or green and there were rarely multiple LEDs lit while the robot was driving in a straight line.
After some experimenting, I found I could produce larger encoder errors by travelling faster or spinning in place.
Here's the long (and I'm afraid rather boring) video of my attempt to document the amount of encoder error while performing various maneuvers.
With the music object active and using a cog to run JonnyMac's WS2812 driver, I'm now using all 8 cogs of the Activity Board's Propeller. Some of the cogs could probably perform additional tasks so it would be very possible to add some additional features. The code used to control the robot in the video had 2,409 free longs.
I think there's enough error in the current control algorithm to warrant the addition of a proper integral component to the PID algorithm. I'm hopeful I'll make some progress toward this goal this weekend.
So far, I've only added the LED object to my "experimental" version of the code. I'm wondering if I should add the WS2812 support to the other versions of the code. The constants "LED_PIN" or "LEDS_IN_USE" could be used to indicate if the WS2812 driver should be started or not.
I'm hoping to hear other people's thoughts about the possibility of including WS2812 support in the Eddie firmware.
IMO, adding WS2812 support is a good idea. The Propeller can happily continuously send data to the WS2811 chips without the added feature interfering with the other features already present in the code. This is another example of why the Propeller makes a great microcontroller for robotic projects.
Not only does the Eddie firmware control the motors while monitoring the encoder feedback but it also can continuously refresh servos, monitor Ping sensors, play music and control LEDs all in the background and without slowing down the main program loop.
I think I've talked myself into adding the WS2812 support to the Eddie firmware. Unless I'm instructed otherwise, I'll likely do this.
As I've mentioned several times lately, I'm now using GitHub to store the Eddie firmware.
https://github.com/ddegn/EddieFirmware
The archive "Eddie - Archive [Date ****].zip" contains the code to use with the original Eddie control board.
The archive "EddieActivityBoard - Archive [Date ****].zip" contains code to use with a Propeller Activity Board and two HB-25 motor controllers.
1) Have you tried your experiments on hard floors such as concrete?
2) Is there perhaps more weight on the right motor than the left?
Sorry if this was discussed already, could have missed it!! Keep up the great work!
Thanks for the feedback. I'll probably add the LED support unless someone from Parallax instructs me otherwise. Of course it would be easy to have a fork of the main Eddie firmware which included the LED support so it doesn't matter much if it's included in the main version of the Eddie firmware.
I'm not one who thinks there's no such thing as a dumb question but I think your questions are spot on and I appreciate your asking them.
Not yet. I think it's generally accepted that robots track better on hard surfaces, but the robot's ability to follow a pre-programmed course isn't really the issue here. I think if the program demonstrated in post #41 were run with the robot on a hard surface, the robot would likely end up closer to the theoretical final destination.
My concern right now is how far one wheel gets ahead of the other as the robot is moving.
I would like to test the robot on a hard floor but there's over six inches of snow outside and I don't really have a hard surface large enough to do much testing. I suppose I could have the robot drive around in the kitchen/dining area but there's not as much room in the kitchen as there is in the living room.
I'll probably try to figure out some sort of course the robot could drive in the kitchen/dining area.
It's possible, but I doubt that's causing the difference in encoder readings. The two motors of the robot don't have identical characteristics. On the bench, with the wheels free to spin, the right motor draws a bit more current than the left motor. I think the differences in wheel position are more likely caused by the differences between motors than by the distribution of weight on the robot.
But even if one side of the robot were heavier than the other, the code should compensate for the differences the uneven weight would cause.
IMO, the code should be able to keep the robot driving straight even if one wheel were on heavy shag carpet and one wheel on concrete. Now that you mention it, I should add extra weight to one side of the robot or the other during testing to make sure the code can adjust for these asymmetrical loads.
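To illustrate why the code should be able to compensate regardless of which side is loaded, here's a minimal, hypothetical sketch (not the actual Eddie firmware, and the gain value is made up): a closed loop that trims each motor's power based only on the difference between the two encoder counts, so it corrects a lagging wheel whether the cause is weight, carpet, or motor differences.

```python
# Hypothetical sketch of keeping a differential-drive robot straight:
# trim motor power from the difference between the two encoder counts.
# KP is an illustrative proportional gain, not a value from the firmware.

KP = 2

def straighten(base_power, left_ticks, right_ticks):
    """Return (left_power, right_power) trimmed so the lagging wheel
    gets more power and the leading wheel gets less."""
    error = left_ticks - right_ticks   # positive: left wheel is ahead
    trim = KP * error
    return base_power - trim, base_power + trim

# If the right side is loaded down and falls 3 ticks behind,
# the right motor gets a power boost and the left is eased off:
left, right = straighten(100, 50, 47)  # -> (94, 106)
```

The point is that the controller never needs to know *why* one wheel lags; the encoder difference alone carries that information.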
I don't think these issues have been discussed. Thanks for bringing them up. And thanks for the kind words.
Besides the error shown in post #48, there's often a bit of error as the robot rolls to a stop. In many of the videos the robot can be seen adjusting itself after coming to a stop in order to bring the encoder count into agreement with the goal count.
I'd really like to do something about the way the robot wiggles into its final position.
Ideally, I'd like the robot to behave (almost) as well as the Scribbler 2. After seeing NikosG's The Artist robot, I changed the design of my robot. I originally planned to have the top level of my robot held up with a "T" shaped support in the center of the robot. After seeing The Artist, I decided to keep the center area clear in case I wanted to copy NikosG's idea of using a larger robot to draw shapes.
Hello Duane,
You've made fantastic progress so far! I think the current direction of the Eddie firmware is the way to go. Anyone who wants to alter the code for stand-alone operation can always do that. All the core routines needed to control the motors, etc. will be there. About the only change I can think of would be a compile-time option to use either the USB port or the XBee to send/receive commands/messages.
I think a good follow-on Open Propeller Project would be a Propeller program for a second controller to act as a host controller instead of a PC. That could be set up to leverage one of the Wii controllers, the Parallax Joystick, etc. for manual robot control. If I recall correctly, someone had a pretty cool Propeller-powered joystick on Kickstarter a while back with an XBee; that could probably be set up to output commands like the PC would and send them over the XBee to the robot.
With the current method of control the Eddie board acts like an intelligent peripheral with a well defined command protocol. As it stands now it can be connected and controlled by a PC, another Propeller, or even a Raspberry Pi.
My robot that uses this wheel kit is currently in pieces while I upgrade the electronics and encoders; once it's done, I'll get a chance to try your current code. Instead of a PC, it has a Raspberry Pi that is going to send commands to the board running the new Eddie firmware.
Robert
It would be pretty easy to change which I/O pins are used for the control interface. If one could get by with a slower baud rate, it should be possible to keep the USB connection while adding XBee support. I'm not sure the slower baud rate is really needed if only one port is active. I have had trouble using the four-port object when two ports are actively using the com lines at 115200 bps. The serial object in use is a modified version of Tracy's four-port driver. I plan to switch back to Tracy's original object with the next revision of the code.
I tried this. Earlier in my testing, I used a second Propeller as a master to the board running the Eddie firmware. While I'm sure this could be made to work, my initial tests were very unsatisfactory. There was a lot of delay between the control movements of my wireless Wii Nunchuck and the movement of the robot. I'm sure part of the delay was caused by multiple debug statements running in the same cog that reads the Wii controller.
My present code uses the Activity Board as the master Propeller board and a second Propeller board (presently a QuickStart) as a slave device. I've tried controlling my robot with the Wii Nunchuck with the Activity Board reading the joystick values directly, and the robot was much easier to control.
The code I'm using for my personal use is in the "Cleaver" folder of my GitHub repository. So far the board running the "CleaverSlave.spin" object is mainly concerned with computing 3D locations based on the servo positions and range readings from the SF02. I haven't really integrated the two boards' efforts yet.
Paul K. ran the KickStarter on his Q2 and Q4 controllers. I have one of the Q4 controllers. I hope to use the Q4 with this robot sometime.
As the Eddie firmware currently works it could even be used as a slave to an Arduino. While this seems rather insulting to the Propeller, if it allowed someone comfortable programming the Arduino to make a better robot than would have been otherwise possible, I'm all for it.
Hopefully the Eddie firmware will make it relatively easy for anyone capable of programming a device to send and read serial data to control some very nice hardware, even if they have limited programming experience.
One should even be able to control the firmware with a Basic Stamp.
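To show how simple the serial interface is for a master device, here's an illustrative sketch in Python (e.g., running on a PC or Raspberry Pi). This is not the official Eddie command reference: the exact argument encoding (space-separated hexadecimal values and a carriage-return terminator) is my reading of the protocol, so verify it against the Parallax Eddie command set documentation before relying on it.

```python
# Illustrative sketch of formatting Eddie-style ASCII serial commands.
# Assumption: arguments are sent as hex (16-bit two's complement for
# signed values) and each command line ends with a carriage return.

def eddie_command(name, *args):
    """Build a single-line Eddie-style command string."""
    parts = [name] + ["%X" % (a & 0xFFFF) for a in args]
    return " ".join(parts) + "\r"

# Drive forward with moderate power on both wheels:
cmd = eddie_command("GO", 32, 32)        # -> "GO 20 20\r"

# A negative (reverse) power level comes out in two's complement hex:
rev = eddie_command("GO", -32, 32)       # -> "GO FFE0 20\r"

# Actually sending it over the serial link would look like this
# (requires the pyserial package; port name is an example):
# import serial
# with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#     port.write(cmd.encode("ascii"))
#     reply = port.readline()
```

Any device that can format a short ASCII string and push it out a UART, from a Raspberry Pi down to a Basic Stamp, can drive this interface.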
I'm anxious to learn how this goes, and I'm very interested to hear your opinion of how the current motor control compares with the earlier motor control boards.
I want to try installing the controllers you sent me, but it takes a bit of work to switch out the encoders. I'm not sure when I'll want to put that much effort into switching the encoders just to compare the difference in performance. I hope the current hardware, with its 36 spokes, outperforms the earlier hardware with its 9 spokes.
Right now, I'm on a bit of a side track as I attempt to use the SF02 to detect obstacle heights.
One nice feature of the old Position Controllers is that they acted like a motor-control co-processor: you could just hand off some of the movement tasks to them. Ideal for something like the BASIC Stamps. I believe the C source is available (at least it was) for the Position Controllers, and maybe some of the logic can be pulled out and leveraged for the #8 Open Propeller Project.
There are a couple of downsides to the original Position Controllers vs. the new ones. The resolution isn't as fine as that of the new quadrature encoders, so the movements may not be as accurate as with the new version. The other issue is that in most installations one of the Position Controllers needs to be set to reversed orientation. That is a soft setting and doesn't take effect until you send the command to the controller. I mentioned this on the forums a while back:
http://forums.parallax.com/showthread.php/125471-Possible-revision-for-27906-position-controller?highlight=position
My large robot has a relay controlling power to the HB-25s, and I keep it off until after the position controller has received the SREV command. Otherwise, if you bump a wheel on the reversed side before sending SREV, that motor will start turning to correct for the bump, but it turns the wrong way, so it just takes off and keeps running. If you aren't prepared for this, your robot can run you over...
I still have a set of the original position controllers that I am keeping for testing as well.
Robert
Yes, I've found the source code for the Position Controllers a while back. I recently started translating this C code to Spin but not long into the translation process, I had repeated experiences of d
+1 Then I could use the code to control my Parallax Stingray Robot Chassis
This is my intention. I have lots of motors with encoders, and I'd really like some sort of universal object to control them. I think it's possible, but when using encoders with lots of transitions, it's not as important to calculate speed based on the time between transitions.
I'm also concerned about the integer math involved. I'll likely need to add some sort of scaling correction for encoders with lots of transitions, and encoders with very few transitions may also require the equations to be adjusted. I think I'm getting close to figuring this out.
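The core of the integer-math concern can be sketched like this (a hedged illustration, not code from the firmware; the resolutions and wheel size are made-up numbers): when converting ticks to distance without floating point, multiply before dividing so precision isn't thrown away, and the same equation then serves both coarse and fine encoders.

```python
# Illustrative sketch of scaling encoder ticks to distance with pure
# integer math. All names and numbers here are hypothetical examples.

def ticks_to_microns(ticks, wheel_circumference_um, ticks_per_rev):
    # Multiply first, divide last: (ticks / ticks_per_rev) * circumference
    # would lose everything to integer truncation if divided first.
    return ticks * wheel_circumference_um // ticks_per_rev

# A coarse 36-tick/rev encoder and a fine 1440-transition/rev quadrature
# encoder on the same hypothetical 480 mm circumference wheel agree on
# one full revolution:
coarse = ticks_to_microns(36, 480_000, 36)      # -> 480000 (one rev)
fine = ticks_to_microns(1440, 480_000, 1440)    # -> 480000 (one rev)

# Per-tick resolution differs hugely, which is why one fixed scale
# factor won't fit both without adjustment:
one_coarse_tick = ticks_to_microns(1, 480_000, 36)    # -> 13333 um
one_fine_tick = ticks_to_microns(1, 480_000, 1440)    # -> 333 um
```

The remaining risk with very high tick counts is overflow of the intermediate product in 32-bit math, which is where a per-encoder scaling correction would come in.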
Do you have encoders on your Stingray motors? I have some of the motors but I don't have encoders for them yet. Oh, or is this your hybrid robot with ActivityBot encoders? If so, you'll need to use a modified, single channel, version of the code (which I hope to write).
My Stingray robot is not my "FrankenBot" Robot.
My Stingray robot is named "Freddie".
Information about my "Freddie" robot build can be found in this thread on the Savage///Circuits forums http://www.savagecircuits.com/showthread.php?78-ZappBots-quot-Freddie-quot-Robot
The robot uses two 2.71"
I came here wondering if you had found a solution to this, because this is exactly what I find when using the ActivityBot->ArloBot C code that only uses "half" of the quadrature encoder.
I was wondering whether using the full quadrature signal would help, but it sounds like the issue is deeper than that.
I think it's extremely likely the current Eddie firmware could be improved. The main improvement would be to use a more precise speed measurement over a shorter sample period.
I'm pretty sure the small over/under shoots seen in the video attached to post #41 could be done away with.
I believe my current PWM/encoder object can measure speed very precisely and very quickly by computing the speed from either a single encoder pulse or a single encoder cycle. The PASM code currently gathers the data needed for this precise speed measurement but the Spin code doesn't make use of this data. I think using the precise speed data in an adjusted PID algorithm would improve the observed performance of the Arlo hardware.
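The idea of measuring speed from a single pulse can be sketched as follows (a hypothetical illustration in Python rather than the actual PASM; the clock rate matches the Propeller, but the encoder resolution and numbers are examples): instead of counting ticks over a fixed sample window, measure the system-clock count between consecutive encoder edges and take the reciprocal. Every edge then yields a fresh, precise speed value, even at low speeds where a fixed window might see zero or one tick.

```python
# Sketch of period-based speed measurement (an assumed approach, not the
# firmware's actual code): speed is derived from the time between two
# consecutive encoder edges rather than a tick count per sample window.

CLOCK_HZ = 80_000_000   # Propeller system clock
TICKS_PER_REV = 80      # illustrative encoder resolution

def speed_rpm_from_period(clock_counts_between_edges):
    """Wheel speed in RPM from the system-clock count between two edges."""
    if clock_counts_between_edges == 0:
        return 0   # no edge seen yet; avoid dividing by zero
    edges_per_second = CLOCK_HZ // clock_counts_between_edges
    return edges_per_second * 60 // TICKS_PER_REV

# An edge every 200,000 clocks (2.5 ms) means 400 edges/s -> 300 RPM:
rpm = speed_rpm_from_period(200_000)   # -> 300
```

The trade-off is that a fixed-window count gets *less* precise as the wheel slows, while the period measurement gets *more* precise, which is exactly where tight PID control near the target position needs it.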
I really don't understand the C code well enough to comment much about it. It seems logical that using both encoder channels would improve the performance of the robot.
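For those wondering what "using both encoder channels" buys you, here is a hedged sketch of the general full-quadrature decoding technique (this is the textbook state-table method, not the Eddie firmware's PASM): with both A and B channels, every edge on either channel yields a countable step plus the direction of travel, giving four counts per encoder cycle instead of one.

```python
# General full-quadrature decoding sketch (not the firmware's code).
# State = (A << 1) | B; index the table by (previous_state << 2) | state.
# Valid transitions count +1 or -1; impossible double-steps (noise) count 0.
QUAD_TABLE = [0, +1, -1,  0,
              -1, 0,  0, +1,
              +1, 0,  0, -1,
              0, -1, +1,  0]

def decode(samples):
    """Accumulate a signed position from a sequence of (A, B) samples."""
    position = 0
    prev = (samples[0][0] << 1) | samples[0][1]
    for a, b in samples[1:]:
        curr = (a << 1) | b
        position += QUAD_TABLE[(prev << 2) | curr]
        prev = curr
    return position

# One full forward quadrature cycle yields 4 counts...
forward = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
pos = decode(forward)            # -> 4

# ...and the same cycle in reverse yields -4:
backward = list(reversed(forward))
neg = decode(backward)           # -> -4
```

Beyond the 4x resolution, the table rejects illegal transitions, so single-channel noise glitches don't silently corrupt the count the way they can when only one channel is watched.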
I intend to revisit this project in the near future, but I'm presently busy learning to control stepper motors.
And while there's room for improvement in the Eddie firmware, I'm reasonably pleased with how the code is working. As I mentioned in post #41, the robot moves to the exact encoder tick specified, so the navigation ability of the robot can't really be improved without using other sensors. The main reason I would want to improve the code would be to use the robot like NikosG's Artist robot. The small "searching" for the exact encoder tick which often occurs at the end of a maneuver doesn't cause a problem when navigating using encoders, but it would interfere with a robot trying to draw smooth lines. It would be nice if the Arlo could be commanded as precisely as the Scribbler 2 (adjusted for encoder resolution).
I have several robot projects which use quadrature encoder feedback. Once I have my CNC router up and running, I'll likely continue work on getting my quadrature encoder code working better.
Edit(4/19/20): Fixed links.
Just writing to provide some feedback. I installed the Eddie program with your firmware, and it works great. Thank you for all your effort.
Even though it works fine in direct mode, I couldn't make it work with the obstacle avoidance. To which pins should I connect the sensors? Any ideas?
Thank you.