Sounds very interesting but of course it's "Impossible" (tm) on the Propeller.
Just in case it's not obvious to IFFI, this was a joke since impossible things are frequently done on the Propeller.
The Propeller is a great microcontroller for this sort of project.
I haven't done much with C yet but I have lots of links to Spin and PASM tutorials in post #3 of my index. Links to Propeller powered machine vision are listed in post #4.
Why is this impossible on the ActivityBot Robot? Somewhere on the internet I read about an implementation on this robot. I am just asking how to proceed.
Sorry. I didn't *really* mean it was impossible. I'm sure it can be done. It's just that frequently when someone on the forum said that something was impossible on the Propeller, that acted as a challenge for someone to prove the statement wrong. Almost inevitably, that would happen and the supposedly impossible thing would be implemented on the Propeller to prove the original statement wrong. My post was intended as a challenge for someone to do that with the SLAM algorithm. I wouldn't be surprised if someone here has it implemented in short order. :-)
Not all jokes are received as intended, it's a cultural thing. David, you got off easy compared to this guy:
I saw that there was a new post in this thread and got really excited that my statement had been proven false already and that there was now a SLAM implementation for the Propeller. Imagine my disappointment when I found that it was only another bad joke like mine! :-)
Duane's your Huckleberry. He's prolly working on it right now!
Actually, I am working on it; that's why I need some helpful guidance from anyone. Up to now, I have received unsatisfactory responses, except for some jokes.
It looks to be a fairly big project. Maybe if you ask more specific questions about the things that are giving you trouble we'll be able to help more.
I am getting unsatisfactory response except some jokes.
You asked a very broad question, and in post #3 and post #5 you received some broad responses which, in my opinion, were about as helpful as could be expected given the information you provided.
The link Andy provided in post #3 leads to lots of great information about SLAM. Some things I learned were:
It is a complex task to estimate the robot's current location without a map or without a directional reference.
Researchers and experts in artificial intelligence struggled to solve the "SLAM problem".
By following a link to the "SLAM for Dummies" book, I learned:
The first step in the SLAM process is to obtain data about the surroundings of the robot.
I thought my list of machine vision links would be helpful when attempting to "obtain data about the surroundings".
An important aspect of SLAM is the odometry data.
Fortunately the ActivityBot includes encoders, but monitoring one's position based on encoder feedback is a challenge in itself.
Phil Pilgrim wrote a great article on how to use encoders with a BOE-Bot. The article was written with the BOE-Bot in mind but the principles apply to any robot with encoders.
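To make the odometry part a little more concrete, here is a rough dead-reckoning sketch in C for a differential-drive robot like the ActivityBot. The tick and geometry constants are from memory, so treat them as placeholders and check them against your own robot; the tick counts are assumed to come from whatever encoder driver you are using (the abdrive library has a call for reading them, if I remember right).

```c
#include <math.h>

/* Placeholder geometry -- check these against your own robot.
   From memory, the ActivityBot encoders give 64 ticks per wheel
   revolution; the wheel diameter and axle width below are rough
   figures, not gospel. */
#define TICKS_PER_REV   64.0
#define WHEEL_DIAM_MM   66.0
#define AXLE_WIDTH_MM   105.8
#define MM_PER_TICK     (M_PI * WHEEL_DIAM_MM / TICKS_PER_REV)

typedef struct { double x, y, theta; } pose_t;

/* Update the pose estimate from the ticks accumulated since the
   last call.  Classic differential-drive dead reckoning. */
void odometry_update(pose_t *p, int dLeftTicks, int dRightTicks)
{
  double dL = dLeftTicks  * MM_PER_TICK;       /* left wheel travel (mm)  */
  double dR = dRightTicks * MM_PER_TICK;       /* right wheel travel (mm) */
  double dCenter = (dL + dR) / 2.0;            /* travel of the robot centre */
  double dTheta  = (dR - dL) / AXLE_WIDTH_MM;  /* heading change (radians)   */

  /* Advance along the average heading over the step, then rotate. */
  p->x     += dCenter * cos(p->theta + dTheta / 2.0);
  p->y     += dCenter * sin(p->theta + dTheta / 2.0);
  p->theta += dTheta;
}
```

Even with perfect code, the estimate drifts as wheel slip and quantization errors accumulate, which is exactly why SLAM has to keep correcting the odometry against the map.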
We often talk about ways of figuring out where one's robot is located, but I'm not aware of any full-fledged implementation of SLAM using the Propeller. It's an extremely complicated and broad topic.
You haven't provided us with much information either.
What language do you plan to use? What sensors do you plan to use? What's your level of experience?
And probably most important, what have you tried yourself so far?
What I did was create an interface to allow the use of Robot Operating System (ROS) http://www.ros.org/ with the ActivityBot.
All of my work is here: https://github.com/chrisl8/ActivityBot
However, it requires attaching a computer to the ActivityBot, so it is not very mobile. It was just for testing before I built an ArloBot.
Here is what I found about SLAM in my research:
1. There are a lot more papers about the theory than actual implementations.
2. The implementations that do exist are:
a. Very CPU intensive
b. Require high-resolution, high-frequency data
The reason I got interested in ROS is that it has packaged-up SLAM implementations that run on a PC.
However, these ROS implementations of SLAM are far too CPU intensive for a Propeller board, or even a Raspberry Pi; they struggle even on a low-end PC.
Also, they need either an expensive sensor ($1,500 and up) or some "fake laser scan" data from a Kinect or ASUS Xtion, both of which are obtainable but a little bulky to mount on the ActivityBot.
So the short answer is that all of the currently available implementations of SLAM that I have found will not work with a Propeller chip. They need a PC with a modern CPU.
The long answer is that the field is wide open and people are writing new things every day. However, my understanding of SLAM is that it is a sort of statistical analysis algorithm. Statistical analysis with large data sets is notoriously heavy on CPU and RAM.
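To show why the resource problem bites so quickly, here is a toy sketch (not taken from any existing Propeller SLAM port) of the most basic building block those packages rest on: a log-odds occupancy grid updated from a single range reading. Even this stripped-down version makes the RAM point: a 128 x 128 grid of one-byte cells is 16 KB, half of the Propeller's 32 KB of hub RAM, before any particle filter or scan matching runs on top of it.

```c
#include <math.h>
#include <stdint.h>

/* Toy log-odds occupancy grid.  128 x 128 cells at 5 cm each covers
   roughly 6.4 m x 6.4 m and already consumes 16 KB -- half of the
   Propeller's 32 KB hub RAM.  Real SLAM grids are much larger. */
#define GRID_SIZE    128
#define CELL_MM      50.0
#define LO_OCCUPIED  3     /* log-odds increment for a hit  */
#define LO_FREE      1     /* log-odds decrement for a miss */

static int8_t grid[GRID_SIZE][GRID_SIZE];   /* 16 KB of hub RAM */

/* Fold one range reading into the grid: cells along the beam are
   probably free, and the cell at the measured range is probably
   occupied.  (x, y) is the robot pose in mm, heading in radians,
   range_mm is the sensor reading. */
void grid_update(double x, double y, double heading, double range_mm)
{
  double dx = cos(heading), dy = sin(heading);
  double step = CELL_MM / 2.0;              /* walk the beam in half-cell steps */

  for (double d = 0.0; d < range_mm; d += step) {
    int cx = (int)((x + d * dx) / CELL_MM);
    int cy = (int)((y + d * dy) / CELL_MM);
    if (cx < 0 || cy < 0 || cx >= GRID_SIZE || cy >= GRID_SIZE) return;
    if (grid[cy][cx] > INT8_MIN + LO_FREE) grid[cy][cx] -= LO_FREE;
  }

  int ex = (int)((x + range_mm * dx) / CELL_MM);
  int ey = (int)((y + range_mm * dy) / CELL_MM);
  if (ex >= 0 && ey >= 0 && ex < GRID_SIZE && ey < GRID_SIZE &&
      grid[ey][ex] < INT8_MAX - LO_OCCUPIED)
    grid[ey][ex] += LO_OCCUPIED;
}
```

A full SLAM package runs something like this for every cell touched by every beam of every scan, plus the statistical machinery that keeps many candidate poses alive, which is where the CPU and RAM go.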
Anything is possible but it depends on your personal skills and what you like to do.
Personally I enjoy using various components together such that each does what it is best at, or at least what I am best at using it for.
Would it be of any use if the robot could send a signal to a stationary "beacon" that would immediately respond with another omnidirectional signal, followed by a rotating (and possibly frequency-ramped) directional signal? The response delay or intensity would indicate distance from the beacon, and the time between the response and the directional signal (or its frequency) would indicate which direction it came from.
Then map using these polar coordinates.
In simple situations the signals could be light or sound, otherwise radio.
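Just to sketch the math behind that idea (all the numbers below are invented for illustration, not taken from any real beacon): the reply delay gives a range, the delay between the reply and the directional sweep gives a bearing, and the polar fix converts straight into map coordinates.

```c
#include <math.h>

/* Hypothetical beacon parameters, purely for illustration:
   an ultrasonic reply travelling at ~343 m/s and a directional
   beam that sweeps one full turn every 100 ms. */
#define SOUND_MM_PER_US   0.343      /* speed of sound, mm per microsecond */
#define SWEEP_PERIOD_US   100000.0   /* one full beacon rotation, microseconds */

typedef struct { double x, y; } point_t;

/* Turn (reply delay, sweep delay) into a map position, given the
   beacon's known location.  The round-trip time gives the range;
   the time between the omnidirectional reply and the directional
   sweep gives the bearing from the beacon to the robot. */
point_t beacon_fix(point_t beacon, double replyDelay_us, double sweepDelay_us)
{
  double range_mm = (replyDelay_us / 2.0) * SOUND_MM_PER_US;
  double bearing  = 2.0 * M_PI * (sweepDelay_us / SWEEP_PERIOD_US);

  point_t robot;
  robot.x = beacon.x + range_mm * cos(bearing);
  robot.y = beacon.y + range_mm * sin(bearing);
  return robot;
}
```

With two or more beacons the fixes could be cross-checked against each other and against the encoder odometry, which is much lighter work than full SLAM.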
Q: Can anyone guide me about the ActivityBot Robot 32500? The thing is, I want to implement the SLAM technique on the ActivityBot Robot.
Hello IFFI and Welcome,
Please keep us updated on any progress you make. This is a very interesting topic because a truly autonomous robot would need some type of precise location-and-mapping sensor capability. The ActivityBot has an SD card that might be able to hold a crude map. There is the SF02 Laser Rangefinder, but regardless of the sensor used (ultrasonic, infrared, laser), it will need to find edges of objects, openings, walls, etc. The list goes on. Dom...
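On the SD card idea: if I remember right, the simpletools library in SimpleIDE lets you mount the Activity Board's SD socket with sd_mount() and then use ordinary C file I/O, so a crude map could be dumped and reloaded along these lines (the pin numbers and grid size here are assumptions to check against the board documentation).

```c
#include "simpletools.h"                      /* SimpleIDE library: sd_mount(), stdio */

/* Activity Board SD-socket pins -- double-check against your board's docs. */
#define DO_PIN  22
#define CLK_PIN 23
#define DI_PIN  24
#define CS_PIN  25

#define GRID_SIZE 128
static char grid[GRID_SIZE][GRID_SIZE];       /* the crude map, one byte per cell */

/* Write the whole grid to a file on the SD card so the map survives a reboot. */
void map_save(const char *name)
{
  sd_mount(DO_PIN, CLK_PIN, DI_PIN, CS_PIN);  /* mount the card (once per run is enough) */
  FILE *fp = fopen(name, "w");
  if (fp != NULL) {
    fwrite(grid, 1, sizeof(grid), fp);
    fclose(fp);
  }
}
```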
You will have to describe this "SLAM" technique, or provide a link.
Posts on the forums are normally answered on the forums so others can benefit from the answers. So do not expect an answer via email.