
Making a "learning" robot with the BASIC Stamp

Robofreak Posts: 93
edited 2007-11-19 06:37 in Robotics
Hello everyone!

I need some advice. I have always wanted to make a robot that can "learn" from its actions, like learning how to walk without being programmed to know how to walk. Is there a way for the BASIC Stamp to rewrite parts of its own program using the WRITE command? My idea isn't very "nuts and bolts"; it's more of a big-picture idea. If anyone has any advice, please let me know.

Thanks!

▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
Austin Bowen,

Robo-freak.com

"One must watch out for mechanics. They start out with a sewing machine, and end up with the atomic bomb"

Comments

  • Mike Green Posts: 23,101
    edited 2007-11-17 02:40
    Theoretically, yes, you could do it. It's not practical, though, since the Stamp byte code is not documented, so you'd have no idea what to rewrite or what to change it to. You could design a virtual machine for a robot programming language, write an interpreter for it in PBASIC, and then have the Stamp rewrite that virtual program as it learns. Generally, systems like this need a larger computer, like a laptop or a small Linux box, that can run something like LISP or Prolog, which are designed for artificial-intelligence-type applications.
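
    A much-simplified PBASIC sketch of that idea, just to make it concrete: the robot's behavior lives as a small table of action codes in EEPROM, a PBASIC loop interprets them, and a learning routine could overwrite the table with WRITE. The action codes, table contents, and DEBUG messages below are invented purely for illustration.

    ' {$STAMP BS2}
    ' {$PBASIC 2.5}
    ' Illustrative only: a tiny "action table" interpreter.

    Actions   DATA 1, 1, 2, 0      ' initial action sequence in EEPROM
    ActCount  CON 4                ' number of entries in the table

    idx       VAR Nib              ' index into the table
    act       VAR Byte             ' current action code

    Main:
      FOR idx = 0 TO ActCount - 1
        READ Actions + idx, act    ' fetch the next stored "instruction"
        SELECT act                 ' interpret it
          CASE 0
            DEBUG "stop", CR
          CASE 1
            DEBUG "step forward", CR
          CASE 2
            DEBUG "turn left", CR
          CASE ELSE
            DEBUG "unknown action", CR
        ENDSELECT
      NEXT
      ' a learning routine would score the run (from sensor feedback) and
      ' then rewrite entries, e.g.  WRITE Actions + 2, 1
      PAUSE 2000
      GOTO Main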
  • Robofreak Posts: 93
    edited 2007-11-17 12:53
    Okay, thanks Mike! I'll look into that. I know it's a big idea, but it would be great if I could build a small humanoid and make it learn how to walk by watching people and experimenting with how to move. Wow. That would get me so much credit for school...

  • Mike Green Posts: 23,101
    edited 2007-11-17 13:41
    Remember that people have been working on this for many, many years and have only recently begun to successfully bite off small pieces of the problem. One university group recently showed a walking robot that had "learned" how to do so. What's not so obvious from the demonstration and the news articles is that there's a lot of computing power behind it, a huge amount of time invested, and an underlying system designed specifically for this kind of learning. We're beginning to understand some of the reflexes built into the spinal cord for walking, and a lot of the basic mechanisms are "hard-wired". They need a lot of experience and learning to work properly, but some of the basics are built in.
  • Robofreak Posts: 93
    edited 2007-11-17 18:31
    Check out this video: http://dailyplanetclips.ca/ Look in the scroll bar on the right for a clip called "A Robot's Reasons". You should see a robot that looks like an octopus thing. That's my inspiration...

  • Phil Pilgrim (PhiPi) Posts: 23,514
    edited 2007-11-17 19:50
    A program can "learn" not only by changing its internal structure but also by modifying the parameters (i.e. variables) that govern its actions. Self-tuning PID loops are an example of the latter. For any kind of learning to take place you need reinforcement — both positive and negative — in the form of feedback. Without feedback, there's no way for a learning robot to direct its behavior on a trajectory that leads to improvement. In biological critters, pleasure and pain are two of the main sources of feedback. In the lower animals, these will operate on a short-term stimulus/response level, while in primates, deferred gratification through planning (i.e. intelligence) will also play a role.
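
    To make the parameter-tuning idea concrete, here is a very rough PBASIC sketch: the Stamp tries a stored servo pulse width, reads one feedback input as the "reward", and hill-climbs the value, reversing direction whenever the feedback turns bad. The pin numbers, step size, and limits are invented for the example.

    ' {$STAMP BS2}
    ' {$PBASIC 2.5}
    ' Illustrative only: learning a single parameter from feedback.

    PulseAddr CON 0                ' EEPROM slot for the learned value
    Reward    PIN 8                ' feedback input: 1 = good outcome
    ServoPin  PIN 0                ' servo being "trained"

    pulse     VAR Word             ' current pulse width, in 2 us units
    up        VAR Bit              ' 1 = currently increasing the value

    Setup:
      READ PulseAddr, Word pulse   ' recall what was learned last time
      IF pulse < 500 OR pulse > 1000 THEN pulse = 750   ' sane default
      up = 1

    Main:
      PULSOUT ServoPin, pulse      ' try the current setting
      PAUSE 500
      IF Reward = 0 THEN up = 1 - up     ' bad result: reverse direction
      IF up = 1 THEN
        pulse = (pulse + 5) MAX 1000
      ELSE
        pulse = (pulse - 5) MIN 500
      ENDIF
      WRITE PulseAddr, Word pulse  ' remember the adjusted value
      ' (a real program would write far less often; EEPROM wears out)
      GOTO Main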

    For a robot to learn, you have to decide how the feedback mechanism will work, in other words, not only how your creature's behavior is to be "rewarded" or "punished", but how those stimuli effect changes in later behavior. It's not easy, but the BASIC Stamp may be capable of it if you don't set your sights too high. To get your feet wet, you may want to try coding a tic-tac-toe player that learns from its successes and failures (i.e. winning and losing) to improve its play. This should be well within a BS2's realm of capability. For more complicated stuff, a member of the BS2p family, with its extra memory, would be a better choice. And there's always the Propeller for more advanced learning platforms.
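
    For the tic-tac-toe idea, the learning step could be as simple as a preference score for each of the nine squares, stored in EEPROM and nudged after every game; the move chooser would then favor high-scoring legal squares. A minimal PBASIC sketch of just that reinforcement step follows (the board handling, move selection, and opponent are left out, and the starting scores and step size are arbitrary).

    ' {$STAMP BS2}
    ' {$PBASIC 2.5}
    ' Illustrative only: the reinforcement step of a tic-tac-toe learner.

    Scores  DATA 128, 128, 128, 128, 128, 128, 128, 128, 128   ' nine squares, all neutral

    idx     VAR Nib
    cell    VAR Byte               ' scratch for one square's score
    won     VAR Bit                ' set by the game code: 1 = we won

    Main:
      won = 1                      ' pretend we just won a game
      GOSUB Learn
      END

    Learn:                         ' call after each finished game
      FOR idx = 0 TO 8
        ' a real player would only adjust the squares it actually played
        READ Scores + idx, cell
        IF won = 1 THEN
          cell = (cell + 4) MAX 255    ' reward: raise the score, capped
        ELSEIF cell >= 4 THEN
          cell = cell - 4              ' punish: lower the score...
        ELSE
          cell = 0                     ' ...but not below zero
        ENDIF
        WRITE Scores + idx, cell
      NEXT
      RETURN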

    There's a wealth of literature available on the subject of machine learning. An early example, which still serves a foundational role, is Samuel's Checker Player, detailed in Feigenbaum and Feldman's classic compendium (from 1963!), Computers and Thought. The ideas presented may seem somewhat dated, but after 44 years the book is still in print, so that should tell you something.

    Good luck! You're embarking on an interesting journey!

    -Phil
  • Peter Verkaik Posts: 3,956
    edited 2007-11-19 06:37
    Tic-tac-toe example:
    http://forums.parallax.com/showthread.php?p=689877

    regards peter