learning robot project — Parallax Forums

learning robot project

jonlink0 Posts: 11
edited 2011-03-07 14:08 in General Discussion
Hello all,


This is a robot I've been planning for a while, but before I get started, I would like your input.

EDIT:
In a nutshell, I am building a simple robot based on a neural net and gradually increasing its complexity as far as realistically possible, starting with a Braitenberg vehicle (a robot in which a photosensor is wired directly to the motor on the same side, with no microprocessor, just a direct circuit connection; this produces light-seeking behavior).

Since this is a complex project, it will progress in stages.
The software will initially consist of a 2-neuron Braitenberg vehicle that learns optimal network weights through a genetic algorithm. The genetic algorithm will test each neural network in real time by letting it control the robot, taking perhaps up to half an hour per generation as it tests each network individually for 5 minutes, evaluating fitness from easily measured quantities (an accelerometer, for example; see http://headphones.solarbotics.net/learnbot.html ). No processing will be done off the robot, such as on a supervising external computer. Once this basic scaffolding is complete, it should scale up to a goal of about 30 neurons and numerous sensors, approaching the intelligence of a rotifer (a really simple "bug"). Because of this method of learning, a recurrent neural network is a viable option.


The current hardware I envision is a Solarbotics ScoutWalker 3 base, a Sumovore BS2 Brainboard add-on, and either the Spin Stamp or the Stamp Stack 2p microcontroller ( http://www.hvwtech.com/products_view.asp?ProductID=565 ). Although the Spin Stamp is a faster multicore processor, the SS2p has upgradeable EEPROM (necessary for large ANNs) and is in-system programmable.


Here are some questions:


Which is better for an artificial neural network? The Stamp Stack 2p or the Spin Stamp?


Is an ANN-based cellular automaton, such as Brian's Brain, a viable option? Rotifer CNS neurons are often bipolar (one input, one output), so the cellular-automaton rules could be restricted to a von Neumann (4-cell) neighborhood without much loss of realism. Would a von Neumann neighborhood be less resource-intensive than a Moore (8-cell) neighborhood? What other changes to Brian's Brain, if any, would you recommend?



Have you done a similar project?



I want to be as realistic as possible since this is a project I am going to see through, so please provide constructive criticism. Thank you!


Comments

  • erco Posts: 20,260
    edited 2010-12-25 23:40
    Um, sounds great. Good luck with all that! A university project?

    Honestly, when I hear that many buzzwords in one sitting, my mind glosses over. Have you built any robots, or worked through the basic BoeBot program yet? That's a great place to start learning and managing expectations. It sounds like you have some lofty goals in mind. Good for you.

    What you describe is more easily achieved in a simulation than in real life. Reality sucks: pretty quickly you find that the real world has lots of restrictions and difficulties that force people to drastically simplify their dream robot into something they can actually build and program.

    I have seen lengthy dissertations about robot "behaviors" that IMHO are hyper-detailed reports of uber-simple robots with basic light sensors and bump sensors. They go on to describe the "virtual intelligence" of these "layered behavior" robots as they bump along. No matter what happens, it's simple code responding to simple sensor input, and there are some random effects of the environment and obstacles. OK, moderately interesting to some of us, but not revolutionary.

    Sounds like you plan to delve deeper than those simple robots if you're talking neural networks and lots of memory. A Stamp of any sort might not be your first choice. A Propeller or a PC on wheels is a better place to start.
  • jonlink0 Posts: 11
    edited 2010-12-26 03:50
    Sorry, I was kinda tired when I wrote that; I'll try to modify it for readability...

    Although I am a university student, this is not an official university project, more of a hobby. I have some experience in Parallax robotics, though, and have made Boe-Bot Braitenberg vehicles in the past. That was, of course, a very simple project, kind of like what you described. I do plan to do my graduate work in robotics, so this is more of a preparation for that. In the beginning this robot will basically be a glorified Braitenberg vehicle, which is why I think at least the first part could be accomplished.

    I do not mind drastically simplifying this robot, I can certainly add complexity later. That's part of why I'm here, so I can modify the design.

    I've posted an outline of this project elsewhere and I got a similar response stating that I should simulate it first. How do you recommend I do that?

    Thank you very much!
  • Leon Posts: 7,620
    edited 2010-12-26 06:48
    Write your software in C on a PC. When it is working you will be able to select a suitable target processor for the robot.
  • Mike G Posts: 2,702
    edited 2010-12-26 07:53
    @jonlink0, About 6-7 years ago, I created a robot that appeared to think. Really, it was clever programming mixed with randomness and an occasional bad sensor reading. You could drop the robot in a room and eventually it could make its way out without getting stuck. The brain was a STAMP 2p. However, I used several task-specific microcontrollers. One controlled motors while others handled ultrasonic and IR vision. The behavior of the robot was very simple. If the robot saw something in the way, it looked around to find a clear path. A little programmed randomness and unfiltered sensor readings caused the robot to execute random behaviors, like backing up and looking around for no particular reason. This made it seem like the robot was thinking. Without the randomness, the robot would occasionally get stuck and oscillate. I could have handled the oscillation, but I thought the result of randomness was interesting, and it took less code.

    My little endeavor was nowhere close to a learning neural network, yet it took several microcontrollers, sensors, and buses, and included concepts like hardware abstraction and interfaces. Plus I had a pretty good idea what I was doing. Your project is huge and IMO nothing like the BEAM link you provided. You'll use BEAM-type stimulus, but learning... Wow

    You’ve been given very good advice. Model your work on a PC. Use C, or whatever language floats your boat, then as Leon said, "you will be able to select a suitable target processor(s) for the robot".
  • Humanoido Posts: 5,770
    edited 2010-12-26 23:43
    For your robot projects, you'll be able to go with a small number of boards, or with many boards to build the large robot brain (being developed over here). This (open source) project has an open request for brain-filling ideas. The hardware is now being assembled. The final result will be a giant hardware brain of sufficient size and capacity to hopefully do a fair job at AI. The dream is to develop a relatively simple algorithm that could be run on the brain for learning. Many people have talked about this, but no one has anything to show.
  • LoopyByteloose Posts: 12,537
    edited 2010-12-27 05:11
    I suspect most of us want robots that handle some physical task that shows visible results rather than merely thinking - this approach is something very slave-like. After all, watching an autonomous human involved in thought is not very interesting.

    And trying to find out what those thoughts are can be rather challenging. But watching a robot wander and learn a maze is amusing. It may be nowhere near real thought, just as a parrot may be nowhere near real speech - but it seems to be a glimmer of intelligence.

    If a robot were to really be thinking at a deep level, I suspect that most of its thoughts would be defensive. After all, that is mostly what humans occupy their thoughts with - their own comforts, goals, and concerns for selfish outcomes.

    A neural net might require more capacity for processing (in terms of speed) and recall (in terms of organized memory storage). It might be best to start out with an SBC (single-board computer) that can run Windows or Linux as an OS and heavily multitask, rather than building a rolling, wandering device. That way, you could explore languages such as Lisp and Forth in great depth. (The Propeller does support Forth; I've no idea if it can use Lisp as well.)

    But your approach seems to be top-down in a very ambitious context. I suspect more might be achieved from a bottom-up approach, with artificial intelligence as the long-term goal. A lot of people and resources have been down so many roads already in search of a learning robot, and even though we have productive and useful by-products, the robot with a soul has yet to arrive.

    Since robots need no sleep, a successful learning robot might just exhaust your abilities to teach it. For instance, nearly all robots can easily be given the capacity to speak, but installing the ability to listen is quite another thing, and just a few words of a meaningful language are the usual result. Having a robot that self-modifies and rewrites its own software is another challenge that might keep you busy for quite some time.

    At the end of the day, when you ponder pursuing these kinds of topics, it seems you can just as easily replace the term 'robot' with 'computer' and skip the wheels, batteries, and some of the sensors. The Propeller is a great little board for an introduction to a lot of aspects of computation, but it is up to you to define your goals and keep up. Good luck; a journey of a thousand miles always starts with a single step.
  • erco Posts: 20,260
    edited 2010-12-27 09:00
    Well said, Loopy B!
  • Humanoido Posts: 5,770
    edited 2010-12-27 10:28
    LoopyByteloose wrote: »
    I suspect most of us want robots that handle some physical task that shows visible results rather than merely thinking - this approach is something very slave-like. After all, watching an autonomous human involved in thought is not very interesting.
    This is the key point. Thinking should be accessible in some way through output. In SEED, it displayed thoughts on a debug screen. However, the bigger brain we're designing will additionally have motor functions that can move a robot in various ways. Human thinking does not always have visible results, but if you can peek into a dream it can be very interesting.
  • Mike G Posts: 2,702
    edited 2010-12-27 10:29
    I agree with Loopy B in that it's all about the approach. IMO, picking a microcontroller comes much later in the project. What about designing a network architecture with smart nodes? Funnel all messaging to a central processor and data store. The next step is to make sense of the data. Over time this information could form basic memory. The memory could then be shared by other robots, reducing the learning curve. I mentioned this approach in the big brain post.

    For a moment, think about what it takes to open a door. Wouldn't it be cool if your robot simply accessed a service to figure that out? Local processes can handle balance, etc.
  • Humanoido Posts: 5,770
    edited 2010-12-27 10:41
    Mike G wrote: »
    The memory could then be shared by other robots, reducing the learning curve. I mentioned this approach in the big brain post. For a moment, think about what it takes to open a door. Wouldn't it be cool if your robot simply accessed a service to figure that out? Local processes can handle balance, etc.
    Ironically, I was just thinking the same. Let's say thousands of people were developing these common apps and posting each on the net. The brain would access the open-door-by-doorknob routine and have access to millions of other routines. The intelligence ("brain") decides which routine is needed, and the robot performs the action to grasp the specific knob/lever/handle and open the door.
  • Mike G Posts: 2,702
    edited 2010-12-27 11:03
    I've been working on stuff like this for several years. My latest creation (a year old now) takes advantage of an IK service to write basic lines using 2- and 3-DOF arms. The arm contacts the service and describes its makeup (dimensions, etc.), then the robot requests result sets (there are many solutions to the problem) for a line defined as (x,y)-(x,y). The robot takes the multiple result sets and determines the best orientation to draw the line. I store and analyze this data so that next time the robot can make a faster decision based on its current orientation.
  • erco Posts: 20,260
    edited 2010-12-27 11:33
    Actually, the notion of "displaying" machine thought in progress is quite interesting. Recall the old 1960s sci-fi movies & TV, where a grid of blinking computer lights was sufficient to get the idea across. Then came Knight Rider's scanning front light. I bet I'm not alone in thinking that watching a computer count in binary is nifty. And there are techniques that show human brain activity in different colors. It seems like a well-thought-out video or LED display showing advanced machine thought would be cool. Maybe a big LED cube with lots of colors...
  • Humanoido Posts: 5,770
    edited 2011-03-02 06:43
    On the original Star Trek series, in the early episodes, the ship's computer said, "Working," so you knew it was thinking.

    So as a follow-up, how did this project turn out???
  • jonlink0 Posts: 11
    edited 2011-03-07 14:08
    Worry not, most of the hardware is out of the way; I just got the robot in the mail a few days ago.

    http://skaterj10.hackhut.com/2011/01/24/hello-world/

    By the way, I highly recommend Solarbotics for BEAM-based robotics projects, since Parallax and Solarbotics each cater to different interests quite well in their respective areas.