
NASA/UAH autonomous beacon navigation and obstacle avoidance robot using Parallax

NASA Robotics Team Posts: 24
edited 2008-09-27 23:43 in Robotics
Hello Parallax forum readers:

We are one of the NASA Robotics Academy teams from the Marshall Space Flight Center. Our project is called "Return To The Moon"; we will be working on it for the next 9 weeks, and our goals are quite ambitious. The project is a joint effort between NASA and the University of Alabama in Huntsville (UAH).
Our project consists of designing and building a prototype autonomous robot. The robot should have the ability to:
- navigate using beacons in known locations
- travel to a specific location in a field
- avoid known obstacles and hazards
- avoid unknown obstacles and hazards

For our project we are going to be using a Parallax QuadRover and adding the capabilities needed to achieve our goals. We are still in the design phase, but some of the systems we are looking to use are Time Domain Ultra-Wide Band (UWB) Radio Transceivers for the beacons, a SICK LIDAR for obstacle detection, and the Propeller chip for navigation and controls. We are extremely excited about the project and the challenges that lie ahead of us.

We will try to update the forum thread every Thursday and give you guys an idea of the progress the project is making. We will post documents, pictures, and links to videos during the course of the building and programming, so keep checking back with us.

Thanks guys,
NASA Robotics Team

Link to our project info:
http://education.nasa.gov/edprograms/descriptions/Marshall_Robotics_Academy.html
http://www.nasa.gov/
http://www.uah.edu/

Comments

  • erco Posts: 20,256
    edited 2008-06-05 23:40
    Sounds quite interesting and definitely doable. Is this ultimately a contest, proof-of-concept model, or a grant-winning demonstration? The answer is probably within the links you sent, but perhaps you could give us an executive summary here. Thanks, and good luck.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    ·"If you build it, they will come."
  • NASA Robotics Team Posts: 24
    edited 2008-06-06 02:33
    This is a proof-of-concept / research project being sponsored by NASA. It's a summer project giving us a chance to help research aspects of traveling to the moon. One area of research is the feasibility of autonomous navigation on the moon and a possible solution to this problem. Hope that answers your question :)
  • NASA Robotics Team Posts: 24
    edited 2008-06-18 16:12
    Okay guys,

    Sorry we missed the update last week, but here's the progress up until this point.

    - Last week we gave a Feasibility Presentation on using beacon navigation on the moon. We went through the beacon types, how to compute a position solution, and types of sensors that can be used for navigation and obstacle avoidance.
    - We're building a power supply and battery charger unit to power the LADAR and radio beacon receiver.
    - We received the RC controller for the QuadRover, but we're awaiting the software to program the Propeller. We used the controller for some basic testing and verifying the system operation.
    - We wrote basic movement code (forward, reverse, left, right, throttle, etc) and tested all of the possible movements to see how the Rover reacts and operates.
    - We purchased a PropNIC (ucontroller.com) and interfaced it to a Propeller. We're able to ping the device, but we have yet to see TCP or UDP traffic even though data is traveling through the hub (we captured the traffic with Ethereal). We need UDP working to get data from the radio beacons.
    - We got better documentation on the LADAR and are sending/receiving data from the Propeller. Still have formatting/decoding problems, but they are being worked on.

    Next week
    - Get the UDP data from the Time Domain radios on the Propeller.
    - Begin coding the Fuzzy Logic navigation algorithms.
    - Complete the system's power supply.
    - Complete brackets and mounts for the devices.
    - Get the RC controls working so we can do terrain testing.

    Thanks guys,
    NASA Robotics Team
  • JohnP Posts: 15
    edited 2008-06-25 04:57
    Just an introductory word from the project instructor!

    In the links that Josh provided, you can see that NASA sponsors interns
    for their Robotics Academy. There are teams at Marshall Space Flight
    Center in Huntsville, AL; Goddard Space Flight Center in Greenbelt, MD;
    and Ames Research Center in Moffett Field, CA. This year there are three
    four-member teams at MSFC. One of the teams is working on their project
    as part of a class at the University of Alabama in Huntsville, Department of
    Electrical and Computer Engineering under the mentor, Dr. Yuri Shtessel,
    and the instructor, myself.

    The first order of business was to get a decent robot. No small challenge.
    Fortunately for us, Parallax gave us a generous educational discount on the
    QuadRover (and threw in extra sensors and Propeller processors). I can't
    think of a better robot for this project and outdoor experiments in general.
    THANKS PARALLAX.

    The challenge that NASA proposed was a beacon navigation system. For
    reasons of practicality in an outdoor environment, radio beacons were chosen.
    We are fortunate that Huntsville, AL is home to Time Domain, a company that
    makes an ultra wide band radio that, among other things, can be used for
    localization. Time Domain has generously loaned several of their radios for the
    summer project. These will allow, under current radio configuration, distance location
    with 1 foot accuracy over 100 meters at an 8 Hz update rate.

    The project has several intermediate goals:

    1. R/C control will be used to test navigability over different terrain,
    and later as an emergency cutoff device.
    2. Autonomous navigation from point-to-point using radio beacons for localization.
    3. Autonomous navigation to a goal position avoiding mapped hazardous areas.
    4. Autonomous navigation to a goal position avoiding mapped hazards and unmapped obstacles.
    5. (optional) Dead reckoning, upon beacon loss, to a safe position or to an area covered
    by other beacons.

    Furthermore, the team will be using a fuzzy control algorithm proposed by staff at JPL
    for lunar and planetary navigation.
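
    For readers curious what a fuzzy rule base looks like in practice, here
    is a minimal Python sketch. It is an illustration only: it is neither
    the team's rover code nor the JPL algorithm, and every membership shape,
    threshold, and name below is made up. It fuzzifies the heading error to
    the goal and the nearest obstacle distance, then blends a few rules into
    one steering command.

    # Minimal fuzzy-steering sketch (illustration only; all numbers assumed).
    # Inputs: heading error to the goal (deg) and nearest obstacle distance (m).
    # Output: a steering command in [-1, 1] (negative = left, positive = right).

    def tri(x, lo, mid, hi):
        """Triangular membership function."""
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (mid - lo) if x < mid else (hi - x) / (hi - mid)

    def fuzzy_steer(heading_err_deg, obstacle_dist_m):
        # Fuzzify heading error: goal is to the left, roughly ahead, or to the right.
        left     = tri(heading_err_deg, -180, -45, 0)
        centered = tri(heading_err_deg,  -20,   0, 20)
        right    = tri(heading_err_deg,    0,  45, 180)
        # Fuzzify obstacle distance: near vs. far.
        near = tri(obstacle_dist_m, 0.0, 0.5, 3.0)
        far  = 1.0 - near

        # Rules: steer toward the goal when the path is clear,
        # swerve (hard right here, arbitrarily) when an obstacle is near.
        rules = [
            (min(left, far),    -0.8),   # goal left, path clear  -> turn left
            (min(centered, far), 0.0),   # roughly on course      -> go straight
            (min(right, far),    0.8),   # goal right, path clear -> turn right
            (near,               1.0),   # obstacle close         -> swerve
        ]
        total = sum(w for w, _ in rules)
        return sum(w * out for w, out in rules) / total if total else 0.0

    print(fuzzy_steer(-30, 10.0))   # clear path, goal to the left -> negative

    A real controller would have more inputs and rules, but the fuzzify /
    apply-rules / defuzzify pattern is the same.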


    Now I'll get out of the way and let the team do their magic.

    John Piccirillo
  • NASA Robotics Team Posts: 24
    edited 2008-06-30 14:04
    We are uploading some pictures we took while working. Please take a look.

    Thanks guys,
    Robotics Team
  • NASA Robotics Team Posts: 24
    edited 2008-06-30 14:14
    We had lots of fun while working... we hope you will also have fun watching our pictures and videos.
  • jazzed Posts: 11,803
    edited 2008-06-30 16:55
    Interesting pictures. Could they possibly be bigger?

    As a kid I was always amazed at the displays in the "space center" in Huntsville.
    Last time I drove by I saw a Space Shuttle there ... pretty inspiring stuff.

    Thanks for sharing.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
  • NASA Robotics Team Posts: 24
    edited 2008-06-30 19:53
    Yeah... we're going to post some life-size pictures! We're going to do a bit of testing and we'll try to post those videos tonight.
  • NASA Robotics Team Posts: 24
    edited 2008-07-01 04:38
    Sorry for the smaller pictures. We are also uploading more pictures, so please enjoy!

    I am not sure if you guys are familiar with LADAR. If not, please read the following sentences:

    LADAR - Laser line scanner that returns a distance reading every half degree
    over a 180-degree sweep, approximately every 80 milliseconds. This will be used
    for obstacle detection. Industrial-strength construction, weighs about 8 lbs.
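
    To give a feel for how such a scan gets used, here is a rough Python sketch.
    It is an illustration only, not the code on our Propeller, and the scan
    format and thresholds are assumptions: take the half-degree range readings,
    look only at a forward sector, and act on the closest return.

    # Sketch of using a 180-degree, half-degree-resolution range scan for
    # obstacle detection (illustration only; scan format and thresholds assumed).

    def nearest_obstacle(ranges_m, fov_deg=60.0):
        """ranges_m: 361 readings, index 0 = -90 deg, index 360 = +90 deg."""
        angles = [-90.0 + i * 0.5 for i in range(len(ranges_m))]
        best = None
        for ang, r in zip(angles, ranges_m):
            if abs(ang) <= fov_deg / 2 and r > 0.1:      # skip the sides and zero returns
                if best is None or r < best[1]:
                    best = (ang, r)
        return best                                      # (bearing_deg, distance_m) or None

    # Example decision logic: stop if something is very close, veer if merely near.
    scan = [8.0] * 361
    scan[180] = 1.2                                      # a return 1.2 m dead ahead
    hit = nearest_obstacle(scan)
    if hit and hit[1] < 1.5:
        print("STOP, obstacle at", hit)
    elif hit and hit[1] < 4.0:
        print("veer away from bearing", hit[0])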

    Thanks Guys,
    NASA Robotics Academy

    Post Edited (NASA Robotics Team) : 7/1/2008 4:59:36 AM GMT
  • NASA Robotics Team Posts: 24
    edited 2008-07-01 04:41
    More pictures... we hope you guys are enjoying them.


    Radio - This is an ultra wide band radio transmitting at 4.7 GHz with a bandwidth of
    3.2 GHz. Four of these radios will be located in a field and one on the rover.
    The system will provide the robot's position over a 200 x 200 meter field with 1-foot
    accuracy. The unit is about 4 x 6 x 2 inches.
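
    For anyone wondering how ranges to fixed beacons turn into a position, here is a rough 2D multilateration sketch in Python. It is an illustration only (not Time Domain's firmware and not our rover code), and the beacon layout is just an example: subtracting the first beacon's range equation from the others leaves a small linear system in x and y.

    # 2D position from ranges to beacons at known positions (illustration only).
    # Subtracting beacon 0's range equation from the others gives A [x y]^T = b,
    # solved here via the 2x2 normal equations.

    def trilaterate_2d(beacons, ranges):
        (x0, y0), r0 = beacons[0], ranges[0]
        A, b = [], []
        for (xi, yi), ri in zip(beacons[1:], ranges[1:]):
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
        ata = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
        atb = [sum(row[i] * bi for row, bi in zip(A, b)) for i in range(2)]
        det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
        x = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
        y = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
        return x, y

    # Four beacons at the corners of a 200 m x 200 m field, rover actually at (60, 40):
    beacons = [(0, 0), (200, 0), (0, 200), (200, 200)]
    truth = (60.0, 40.0)
    ranges = [((bx - truth[0])**2 + (by - truth[1])**2) ** 0.5 for bx, by in beacons]
    print(trilaterate_2d(beacons, ranges))   # ~ (60.0, 40.0)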

    If you guys have any questions about these pictures, please feel free to ask us.

    Thanks guys,
    NASA Robotics Academy

    Post Edited (NASA Robotics Team) : 7/1/2008 5:01:21 AM GMT
  • NASA Robotics Team Posts: 24
    edited 2008-07-01 16:05
    As we said earlier, we had our feasibility presentation a couple of weeks ago. This presentation includes a description of the design problem, project objectives, the design concept (proposed approach or solution) for the beacons and robot, and the results of the research. The study also covers the feasibility of using a beacon-based navigation system on the moon. Preliminary cost data and a tentative project schedule are also included in the presentation.

    Thanks guys,
    NASA Robotics Team
  • jazzed Posts: 11,803
    edited 2008-07-01 18:21
    I suppose using a QuadRover is only for proof of concept here on Earth.
    Perhaps your control system would be adapted to a functional moon rover.
    Any chance that a Propeller processor would go with it?

    The pics look great. You have a fine looking team.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
  • NASA Robotics Team Posts: 24
    edited 2008-07-08 01:49
    Hey guys,
    We had our design review presentation last Thursday and we are uploading our presentation slides... feel free to ask questions.

    Thanks,
    Robotics Academy

    Post Edited (NASA Robotics Team) : 7/9/2008 12:34:32 AM GMT
  • Greg Norton Posts: 70
    edited 2008-07-08 23:58
    I would be especially interested in your experience with the Time Domain radios and how easy or hard it was to interface them with the Propeller. I browsed their website, but the materials there didn't make a whole lot of sense to me. For example, what are the components of this system? I see a radio, but does it use reflectors or some other method as the beacons? Are the beacons active and a receiver on the rover?

    Any details you can offer would be greatly appreciated.

    Thanks.

    Greg
  • NASA Robotics Team Posts: 24
    edited 2008-07-09 01:38
    Greg,

    The Time Domain radios use ultra wideband radio to communicate with each other, and ranging and positioning are done using only the radios. You can set up a network of radio beacons at known positions and range against a mobile node (or multiple mobile nodes) to get its positioning data. They have a test program that shows the nodes and their positions on a 2D coordinate system. I also believe the radios are capable of 3D positioning, but for our application we didn't really need that capability.

    Interfacing to the devices is quite easy: they use UDP packets. Once you send a "connect message", the radio just keeps sending packets every time any node on the network sends either user data or new position data. I can get you the contact info of the Time Domain person helping us with the radios if you would like. He would be better able to answer some of your specific questions.
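
    To give a feel for what "send a connect message, then packets keep arriving" looks like from the PC side, here is a bare-bones UDP sketch in Python. The IP address, port, and payload below are placeholders for illustration, not Time Domain's actual protocol.

    # Bare-bones UDP exchange sketch (address, port, and payloads are placeholders,
    # not the actual Time Domain protocol).
    import socket

    RADIO_ADDR = ("192.168.1.100", 21210)      # hypothetical radio IP and port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", 21210))                     # listen on the same port locally
    sock.settimeout(2.0)

    sock.sendto(b"CONNECT", RADIO_ADDR)        # stand-in for the radio's connect message

    try:
        while True:
            data, sender = sock.recvfrom(1024)             # each report arrives as a datagram
            print(len(data), "bytes from", sender, data[:16].hex())
    except socket.timeout:
        print("no more packets")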
  • NASA Robotics Team Posts: 24
    edited 2008-07-09 02:51
    We have some videos!!!

    www.youtube.com/user/NASAROBOTICS
  • Greg Norton Posts: 70
    edited 2008-07-09 18:19
    Thanks for the offer. Yes, please let me know the contact information (PM me if you don't want to make it public). If you know of any technical documents on their website that are public, a pointer may do just as well. I'd like to know more about the theory of operation, the specific radio configuration, and how the data is conveyed.

    Greg
  • NASA Robotics Team Posts: 24
    edited 2008-07-17 04:27
    In these pictures the robot is shown with almost all the hardware mounted... the only things that are missing are the power supply box and encoders. You can see the LIDAR mounted on the front, then the battery box that supplies power for the LIDAR only (because we are missing the power supply box). We also installed on the side a panic button that is connected in parallel with the main kill switch so we can use either one (this avoids the necessity of using a cord attached to the robot's kill switch). Finally, in the back is a mobile radio beacon which is temporarily powered by the QuadRover's main battery.
    This configuration is subject to change.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    http://www.youtube.com/user/NASAROBOTICS
  • NASA Robotics Team Posts: 24
    edited 2008-07-17 04:49
    Here are some pictures of the tests we have done with the beacon navigation system. The robot is able to drive in a straight path towards a goal, and once it reaches that spot in the virtual map it stops automatically.

    The robot is not able to turn yet, since we have not finished writing the algorithm to do this. This was just an initial test.

    We will post some videos of this test on YouTube.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    http://www.youtube.com/user/NASAROBOTICS
  • NASA Robotics Team Posts: 24
    edited 2008-07-18 15:14
    We posted some more videos on YouTube. The new ones are titled "Beacon Navigation Test 2.X - NASA Robotics Academy". The first set of videos had the robot traveling straight to a goal without any turning. Our code had too many cogs, but we wanted to test the ability to travel to and stop at a waypoint; the 1.X series of videos were those tests. We compressed the code and got it to operate within the 8 cogs.

    Now the robot can determine its heading and the waypoint's heading and travel to a goal. We also added two more Ultra Wideband radios to the system, so we now have 4 coordinate nodes and one mobile node. The robot is a bit sluggish turning right (which some of the videos show), but you can have the robot facing 90 degrees away from the waypoint and it will turn and travel towards it. Once we add multiple-waypoint capability and fix the sluggish right turn, we will have completed all of the waypoint navigation goals. Then we'll focus on adding known obstacles and unknown obstacles to the robot's navigating complexity.

    Let me know if you guys have any questions or want some clarifications. The more questions the better, now that we have something to show :) !!
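
    For those curious how the heading logic works in principle, here is a small Python sketch (illustrative only; the code actually running on the Propeller is organized differently). The rover's heading is estimated from two consecutive position fixes, the bearing to the waypoint is computed, and the sign of the wrapped error decides which way to turn.

    # Heading-to-waypoint sketch (illustration only). Positions are (x, y) in
    # meters from the beacon system; angles are degrees, counter-clockwise from +x.
    import math

    def bearing(p_from, p_to):
        return math.degrees(math.atan2(p_to[1] - p_from[1], p_to[0] - p_from[0]))

    def heading_error(prev_fix, curr_fix, waypoint):
        heading = bearing(prev_fix, curr_fix)    # direction we are actually moving
        desired = bearing(curr_fix, waypoint)    # direction we want to go
        err = desired - heading
        return (err + 180) % 360 - 180           # wrap into [-180, 180)

    prev_fix, curr_fix, waypoint = (10.0, 10.0), (11.0, 10.0), (20.0, 20.0)
    err = heading_error(prev_fix, curr_fix, waypoint)
    if abs(err) < 5:
        print("drive straight")
    elif err > 0:
        print("turn left", round(err), "deg")
    else:
        print("turn right", round(-err), "deg")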

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    http://www.youtube.com/user/NASAROBOTICS
  • SRLM Posts: 5,045
    edited 2008-08-02 05:50
    Will your LADAR be compensated for the tilt of the robot? For example, if it were going downhill and reached the bottom, the ground might read as a close object and hence it would go really slow... Also, will you use GPS? It seems more feasible to use a satellite beacon on a new planet than a surface beacon system that requires four stations for 40,000 square meters. Although useful, these beacon navigation techniques seem a bit limited. Mayhaps you should launch a couple of geostationary satellites...
  • NASA Robotics Team Posts: 24
    edited 2008-08-02 13:42
    In answer to your first question about the tilt of the LADAR, we thought about that problem. First, we're not running the robot in autonomous mode on hilly terrain. We tested the capabilities of the QuadRover (in RC mode) on hills, sand, gravel, rocks, grass, etc. to see how the system reacts and to get a feel for the dynamics of the robot. We have the LADAR mounted 2-3 feet off the ground, so it's sitting quite high, and we're only going to use it for large obstacle detection and avoidance at the moment. For the hill problem, we're looking at putting accelerometers on the robot to measure the tilt of the terrain. In the case where the robot is traveling downhill, we can take the tilt into account as the robot gets closer to the bottom and not count the ground as an obstacle.
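
    As an illustration of that idea (a Python sketch with assumed numbers, not our code): with a pitch estimate in hand, a forward range return can be compared against where flat ground ahead would intersect the beam, and ignored if it is just the slope at the bottom of the hill.

    # Sketch of rejecting ground returns from a forward-looking range scanner
    # using a pitch estimate. Mounting height and margin are assumptions.
    import math

    SENSOR_HEIGHT_M = 0.75     # LADAR mounted roughly 2.5 ft above the ground

    def is_ground_return(range_m, pitch_deg, margin=0.5):
        """True if the return is about where flat ground ahead would be,
        given the rover is pitched nose-down by pitch_deg."""
        if pitch_deg <= 0.5:
            return False       # level or nose-up: the beam misses nearby ground
        ground_range = SENSOR_HEIGHT_M / math.sin(math.radians(pitch_deg))
        return abs(range_m - ground_range) < margin

    # Nose-down 10 degrees near the bottom of a hill: flat ground shows up ~4.3 m out.
    print(is_ground_return(4.3, 10.0))   # True  -> ignore, it's just the slope ahead
    print(is_ground_return(2.0, 10.0))   # False -> treat as a real obstacle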

    In reference to GPS: our project dealt with the feasibility of using lunar beacon-based navigation as an interim solution to GPS on the moon. We can use GPS (and the QuadRover has a GPS attachment), but there won't be satellites around the moon for a while (too costly, and they could not be utilized by enough people yet), so this system would be an interim solution.

    We essentially had 8 weeks to get the robot navigating, and we are now able to navigate to a waypoint and detect obstacles. We've done some lab simulation of obstacle avoidance and it's working properly; this coming week is our last week, and we're going to take the robot outside to test those features.

    Our signature has a link to our Youtube page if you would like to see some of the videos.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    http://www.youtube.com/user/NASAROBOTICS
  • JohnP Posts: 15
    edited 2008-08-08 20:44
    Today was the last day for the NASA summer interns and I would like to
    summarize some of the work they accomplished.

    1. The QuadRover autonomously navigated to three pre-determined
    waypoints spread over a 30 m x 30 m field. The rover was started
    in various positions and orientations. Four radio beacons were placed
    at the field corners. The rover was not constrained to start within the
    30 m x 30 m perimeter. This worked quite well and videos will be posted
    shortly.

    2. The on-board ladar was used to avoid unknown obstacles. On approaching
    an obstacle the rover would turn to the side and continue moving. If the
    obstacle was inside a pre-set distance, the rover stopped. The avoidance
    was inverted so that the rover would "follow-the-leader". Videos of both of
    these modes will be posted.

    The plan is to continue adding sensors and capabilities to the rover in the coming
    Fall semester. Some goals:

    1. Interface a PC (probably running Linux) with the Propeller. Right now we are
    using all eight cogs and the memory is about 80% full (this includes a chunk of
    fuzzy control logic that was written, compiled, and simulated but not tested on
    the rover).

    2. Install wheel encoders and a rate gyro for dead-reckoning, i.e. a crude IMU
    (a rough sketch of the idea follows this list).

    3. Navigate under fuzzy control software to integrate going to a goal while avoiding
    known (mapped) hazard areas and unknown obstacles.
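
    The dead-reckoning idea in item 2, roughly, as a Python sketch with made-up
    sample rates and scale factors (not the planned rover code): integrate the
    gyro rate for heading and the encoder counts for distance, and accumulate
    x and y between beacon fixes.

    # Dead-reckoning sketch: wheel encoder for distance, rate gyro for heading.
    # Sample period and counts-per-meter are assumptions for illustration.
    import math

    DT = 0.05                 # 20 Hz update (assumed)
    COUNTS_PER_METER = 2000   # encoder scale factor (assumed)

    x, y, heading_deg = 0.0, 0.0, 0.0

    def dead_reckon_step(encoder_delta_counts, gyro_rate_dps):
        """Advance the (x, y, heading) estimate by one sample."""
        global x, y, heading_deg
        heading_deg += gyro_rate_dps * DT                   # integrate turn rate
        dist = encoder_delta_counts / COUNTS_PER_METER      # distance this sample
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
        return x, y, heading_deg

    # One second at ~1 m/s while turning at 10 deg/s:
    for _ in range(20):
        dead_reckon_step(100, 10.0)
    print(round(x, 2), round(y, 2), round(heading_deg, 1))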

    We'll keep you posted.



    Attached is the team's Final Presentation.

    John Piccirillo, Ph.D.
    University of Alabama in Huntsville

    Post Edited (JohnP) : 8/13/2008 4:58:01 PM GMT
  • JohnP Posts: 15
    edited 2008-08-13 17:03
    If you haven't already checked out the videos, five new ones were
    posted to

    www.youtube.com/NASArobotics

    on August 8th.

    Enjoy.

    John Piccirillo
  • SRLM Posts: 5,045
    edited 2008-08-13 18:31
    If you want the robot to know that it's going uphill/downhill, you'll want to use gyros. An accelerometer reads acceleration the same as tilt. Plus, you'll need both in order to do dead reckoning (3 accelerometers and 3 gyros).
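
    One common way to get the two sensors to agree on tilt is a complementary filter: trust the gyro over short intervals and the accelerometer's gravity reading in the long run. A rough Python sketch (not from this project; constants are made up):

    # Complementary-filter sketch for pitch: gyro for short-term changes,
    # accelerometer (gravity direction) for long-term correction. Constants assumed.
    import math

    DT = 0.02        # 50 Hz sample period (assumed)
    ALPHA = 0.98     # how much we trust the integrated gyro each step

    pitch_deg = 0.0

    def update_pitch(gyro_rate_dps, accel_x_g, accel_z_g):
        """gyro_rate_dps: pitch rate; accel_*_g: body-axis accelerations in g."""
        global pitch_deg
        accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))   # tilt from gravity
        gyro_pitch = pitch_deg + gyro_rate_dps * DT                    # integrated rate
        pitch_deg = ALPHA * gyro_pitch + (1 - ALPHA) * accel_pitch
        return pitch_deg

    # Sitting still on a 10-degree slope, the estimate converges toward 10 degrees:
    for _ in range(500):
        update_pitch(0.0, math.sin(math.radians(10)), math.cos(math.radians(10)))
    print(round(pitch_deg, 1))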
  • JohnP Posts: 15
    edited 2008-08-29 14:51
    A new team will be continuing where the summer team left off in
    developing the QuadRover as a lunar and planetary rover.
    They will use the radio navigation system and laser object detection
    system begun by the NASA summer robotics team and add capability
    to the navigation algorithm. Since this is a new team, they will be
    introducing themselves in a separate thread soon.

    John-
  • Luiz mauricio mion Posts: 77
    edited 2008-09-08 09:33
    Interesting, and the sensor suite for this robot has to be agile and compact in size.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
  • JohnP Posts: 15
    edited 2008-09-27 23:43
    For those who have followed this thread, there is a new team taking
    over where the summer NASA Robotics interns left off. We will continue
    news and discussion about the follow-on effort on a new thread entitled
    "UAHuntsville Team Luna".

    John Piccirillo