
Question for all Engineers and Hobbyists. I am stumped. Advice needed.

MovieMaker Posts: 502
edited 2010-04-13 21:44 in Robotics
OK, I am going to share my situation in hopes that someone out there can answer some of my questions. I will post this on several boards.

Here is what I have: two Propeller 8-core CPUs, three BASIC Stamp IIs of different speeds, 11 small robots, one Arduino, one Arduino Mega, several single-core computers plus one dual-core and one quad-core, one laptop, and one netbook. One robot head. Broadband internet, wired and wireless, with a Linksys WRT54GL router. I have somewhat entry-level experience with BASIC and C++.

I have mastered the art of obstacle avoidance. But when I try to do more, everything gets so slow that it becomes impractical.

I have two webcams that can recognize faces, tell who they are, and track moving objects. My voice is recognized, and I can hear the answer spoken in a human voice. It can also recognize colors and flesh tones.

There is a database that can interact with me and learn from my speech. I have not tried OCR yet, but I am sure it will work. I have not tried object recognition, but I am sure it will work. I have used chatterbots, OpenCV, and plenty of other open-source tools.

Now, the problem is that all of this is scattered across many robots and computers. How would one go about tying all of these pieces together into ONE machine? I am lost as to how to do this. Where do I start? I have a deep desire to experience AI in a machine. Not to give it orders or commands, but to communicate with it and have it exercise its own free agency. To watch it learn and grow and become more than just the sum of its parts.

Oh, by the way, my wife says I have NO MORE money to dump into this stupid project. What would you do if you were in my shoes? Where would you start?

I hope there is an engineer or hobbyist out there who can answer that question for me.

Thanks!

MovieMaker

yhmmc@yahoo.com

Comments

  • Mike Green Posts: 23,101
    edited 2010-04-13 16:29
    There is no "answer" to the question. This is the stuff that occupies the careers of robotics researchers at a variety of institutions around the world. The truth is that you don't really want to tie all this into one machine. You really want a variety of semiautonomous processors doing various tasks, but communicating with some organizing center. This is very much like most biological systems where there's a lot of local processing going on in various "reflex" centers and some of the raw or low-level data as well as the processed data is forwarded on to higher levels. Relatively high level actions are then communicated to lower levels where the details of implementing them are handled.

    To simplify things, you may want to combine some things. It might be useful to eliminate the Stamps and move their functions into part of one of the Propellers or maybe add a Propeller to replace them. You may be able to replace the laptop and the netbook with just the laptop. You may be able to replace the Arduinos with a Propeller. It all depends on the details.

    You say, "I have a deep desire to experience AI in a machine. Not to give it orders or commands, but to communicate with it and have it exercise its own free agency. To watch it learn and grow and become more than just the sum of its parts." Unfortunately, no one has really done this. There are large groups that have bitten off pieces of it and implemented them. One group implemented an AI that exists in a "blocks world" containing a bunch of blocks. You can ask it questions and ask it to do things, like stack up certain blocks or move the blocks one at a time to another part of the space, and it figures out what you're asking and how to sequence the actions to accomplish what's requested. This works because the "world" is very limited and the vocabulary and context are limited. I've seen videos of a cooking robot in Japan that can make a kind of pancake under voice control. It uses binocular vision to monitor what's going on in front of it. Again, this works because the context is very limited and the vocabulary is very limited. The verbal exchange between the robot and the customer is constrained so that most answers are limited to one or a few words from a limited group of words.
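
    The reason a constrained vocabulary works is that the robot only has to match what it hears against a handful of expected words. Here is a minimal sketch of that idea in Python; the command set and parsing function are invented for illustration and are not taken from any of the systems above:

        # Toy command interpreter with a deliberately tiny vocabulary.
        # Because only a few words are valid, a noisy speech-recognition
        # result can still be resolved by simple keyword matching.

        VERBS = {"stack", "move", "lift"}      # allowed actions (illustrative)
        COLORS = {"red", "green", "blue"}      # allowed block colors (illustrative)

        def parse_command(heard_text):
            """Return (verb, color) if the utterance fits the tiny grammar, else None."""
            words = heard_text.lower().split()
            verb = next((w for w in words if w in VERBS), None)
            color = next((w for w in words if w in COLORS), None)
            if verb and color:
                return verb, color
            return None

        # A sloppy recognition result still parses, because we only
        # look for the few words we know about.
        print(parse_command("uh please stack the red block"))  # ('stack', 'red')
        print(parse_command("make me a sandwich"))             # None -> ask the user to repeat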
  • MovieMaker Posts: 502
    edited 2010-04-13 16:40
    Thanks, Mike, for the answer. I am just a dreamer, but I know that the technology is getting closer and closer. It was good to hear from you again.

    :)
  • Sandhuamarinder Posts: 85
    edited 2010-04-13 16:41
    Man, you seem smart enough. Start something and we will give you backup here if you get stuck anywhere.

    If I had all that stuff, I think I would make a small chopper with cameras, fly it with a joystick, and check out the view of your neighbours' bathrooms. Just kidding.

    Man, start anything you think you can make. It depends on how much time you can spend on it. Programming a long project needs a lot of patience, because you get tired and frustrated when things don't work and you feel like throwing the (expletive) thing out of your house. The rest depends on how much capability you have.

    If you want to spend more money, then build a real helicopter and have fun with it and with your wife. She will like it, and will love you more, hahaha.

    Man, I am just a student and I am giving you advice, hahaha. Sorry.
    But don't take it too seriously; make whatever you like. Make something cool for your wife. Make a cool security system so you can see who is at your door, and put an HMI (expletive) on the door so a visitor can play a game on it while waiting for you to answer. Put a speaker and a mic out there with software that recognizes voice commands, and do some nice programming so the robot answers back and holds a conversation. Then show the data on an LCD inside the home: Name ......., Occupation ..........., all of it on the LCD inside while the person is speaking outside of your house. Use the doorbell as the pushbutton that turns the whole system on.

    Isn't that cool? Whoever comes to your house is going to think you are awesome and very cool. What do you think, man? There is a lot of stuff out there you may have to buy to finish your project; small things like screws for the LCD make your bills go very high.
    Man, there are a lot of ideas in my mind.

    I am just a student, so don't take anything too seriously. These are my ideas. Parallax is always here to help you, man.

    Post Edited (Sandhuamarinder) : 4/13/2010 4:46:34 PM GMT
  • MovieMaker Posts: 502
    edited 2010-04-13 17:04
    Dear Sand,
    Thanks for the kind words. Don't underestimate yourself. I taught for 12 years at a university, and guess what?
    I learned more from my students than I taught them. I NEVER met a person I could not learn something from.

    Thanks,

    MovieMaker
  • Mike Green Posts: 23,101
    edited 2010-04-13 17:17
    I don't mean to be discouraging. You've already got a lot of pieces working. It may be time to take the pieces, one at a time, and refine them. Start with the lowest levels. Clean up the interfacing and figure out what makes sense for communications on the basic tasks being done. For example, it makes sense to include some object avoidance in the low level maneuvering controller. This would be like the withdrawal reflex from pain in a limb, done in the spinal cord. It can be partially suppressed or modified from higher centers, but is handled mostly (and simply) at the lower level.
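
    One way to picture that "withdrawal reflex" idea in code is a small loop on the low-level controller: it always watches the range sensor, but the higher level can retune or briefly suppress the reflex. A rough sketch in Python, assuming hypothetical read_range_cm() and set_motors() helpers standing in for whatever board actually reads the sensor and drives the wheels:

        import time

        class AvoidanceReflex:
            """Low-level obstacle 'reflex' that higher levels can tune or suppress."""

            def __init__(self, read_range_cm, set_motors, threshold_cm=25):
                self.read_range_cm = read_range_cm   # hypothetical sensor helper
                self.set_motors = set_motors         # hypothetical drive helper (left, right)
                self.threshold_cm = threshold_cm     # a higher level may adjust this
                self.suppressed_until = 0            # a higher level may suppress briefly

            def suppress(self, seconds):
                """Higher-level request: ignore the reflex for a short time."""
                self.suppressed_until = time.time() + seconds

            def step(self, requested_left, requested_right):
                """Called every control cycle with the speeds the higher level wants."""
                if time.time() > self.suppressed_until and self.read_range_cm() < self.threshold_cm:
                    # Reflex wins: back off and turn away instead of obeying the request.
                    self.set_motors(-50, 50)
                    return
                self.set_motors(requested_left, requested_right)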
  • MovieMaker Posts: 502
    edited 2010-04-13 17:32
    Good point, Mike. I wanted to tie the whole thing to Wolfram Alpha or some database like that on the web. But I am not a guy who is easily satisfied with normal stuff. I have been working on this since 1975. I have not given up, but I am getting tired.
  • allanlane5 Posts: 3,815
    edited 2010-04-13 18:22
    The usual solution to "things become too slow" is to off-load some of the processing to 'satellite' or 'slave' processors in the extremities. So you'll have a 'master' processor to do the database lookups, 'learning', and coordinating, and 'slave' processors to do the low-level work. Then you'll have to define and specify a simple protocol to let the 'master' control the 'slave' processors to do what the 'master' wants (a sketch of one possible protocol is at the end of this post).

    Then, to test and verify this, you'll need a simple 'master stand-in' processor to test each 'slave' node. Once a 'slave' node is verified to be working, you can then hook up the real 'master' to it. It's also possible to have your PC be the 'brain', and have the 'brain' talk to a 'real-time master' processor. The 'real-time master' then does all the finicky movement commanding of its slave processors.

    Get all this working, and you'll have a multi-processor, multi-tasking platform to work with.
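
    To make the "simple protocol" and "master stand-in" ideas concrete, here is one possible sketch: short ASCII command lines over a serial link, assuming pyserial on the PC side and a slave (Propeller or Arduino) that answers each command with "OK" or "ERR". The command letters and port name are made up for illustration:

        import serial  # pyserial; the slave end implements the same line protocol

        # One-line ASCII commands, e.g. "M 100 80"  -> set left/right motor speeds
        #                                "P 3 1500"  -> move servo 3 to 1500 microseconds
        # The slave answers each command with a single "OK" or "ERR" line.

        def send_command(port, command):
            port.write((command + "\n").encode("ascii"))
            reply = port.readline().decode("ascii").strip()
            return reply == "OK"

        def test_slave(port_name):
            """Stand-in 'master': exercise one slave node and report pass/fail."""
            with serial.Serial(port_name, 115200, timeout=1) as port:
                for cmd in ["M 0 0", "M 100 100", "P 1 1500"]:
                    if not send_command(port, cmd):
                        print("FAILED:", cmd)
                        return False
            print("Slave on", port_name, "passed all checks.")
            return True

        if __name__ == "__main__":
            test_slave("COM3")  # port name is just an example

    Once each slave passes this kind of check on its own, the real master only has to speak the same few command lines.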
  • MovieMaker Posts: 502
    edited 2010-04-13 18:47
    I was thinking of having a master computer running Windows XP. It would do most of the thinking and would talk wirelessly to the other CPUs it controls. Each CPU would handle one section of the overall process and return a value and other data to the master. My problem is not so much the hardware as the software to do it. You see, each individual section runs on a different OS or platform: master = XP, obstacle avoidance = Linux, speech recognition and generation = Java, other stuff in BASIC and C++. I need to tie all of these to a common interface (see the sketch at the end of this post). Since I asked this question, I have been thinking that maybe I could tie all of the units to the Net.

    Oh, by the way, my main database is hopefully going to be Wolfram Alpha. It is a web knowledge base, somewhat like Google. I would like the robot to pull information from Google, Wikipedia, Wolfram, etc., process it, and feed it back to the robot. Maybe I am dreaming too much, but in my mind's eye that is what I was thinking of. I am only an entry-level programmer so far; I have written professional software, but that was years and years ago. To start off, I am simply going to put the robot head on top of the Roomba. I am expecting obstacle avoidance, speech recognition, speech generation, video recognition, and at some point OCR and object recognition. All of this will be tied into a chatterbot or some sort of AI software brain.

    Forgive me if I am being foolish, but Project Aiko has done all of this except the obstacle avoidance; she is only a body with wheelchair-level mobility. But she sure does a good job. Her inventor, Mr. T. Lee, doesn't really want to share his B.R.A.I.N.S software. Basically, it is all I need to complete my project. I can understand that he has worked on it for 15 years and doesn't want to give it away. I have tried to buy it, but he doesn't want to share it. It is a pity, because this could help push us closer to the Singularity. I am up in age and running out of time, and my productivity level decreases daily. But I am pushing toward my goal.

    Thanks for all of your help.
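
    On the "common interface" problem mentioned above: since Windows XP, Linux, Java, BASIC, and C++ can all open TCP sockets and handle plain text, one low-tech option is to have every module speak the same format over the network, for example one JSON object per line. A rough sketch of the master's side in Python; the addresses, ports, module names, and message fields are invented for illustration:

        import json
        import socket

        def ask_module(host, port, request):
            """Send one JSON request line to a module and return its JSON reply."""
            with socket.create_connection((host, port), timeout=5) as sock:
                sock.sendall((json.dumps(request) + "\n").encode("utf-8"))
                reply_line = sock.makefile("r", encoding="utf-8").readline()
            return json.loads(reply_line)

        # Example: the master asks a (hypothetical) vision module on the Linux box
        # whether it sees a face, then asks the speech module to greet the person.
        if __name__ == "__main__":
            vision = ask_module("192.168.1.50", 5001, {"cmd": "detect_face"})
            if vision.get("face_seen"):
                ask_module("192.168.1.51", 5002,
                           {"cmd": "say", "text": "Hello, " + vision.get("name", "there")})

    Each module, whatever language it is written in, only has to listen on a port, read one line, and write one line back.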
  • Franklin Posts: 4,747
    edited 2010-04-13 21:09
    When industry starts a project, they develop and troubleshoot one section of the whole until it works flawlessly, then tackle another. When all the parts are working, they combine two or three of them with wires between them, and when that works, they combine those into one device and add a few more parts. In the end they have a finished product. What I'm getting at is that small steps will keep you from getting overwhelmed.

    ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔
    - Stephen
  • MovieMaker Posts: 502
    edited 2010-04-13 21:44
    Makes good sense.