Autonomous robot navigation
Hello,
I am currently working on associative video memory (AVM). The method is still in development (it is now at version 0.5),
but it already gives good results today.
I have been doing computer vision research in parallel with my main job
at "Impulse" for more than three years (it is my hobby).
About me
In the beginning my achievements were insignificant and only a small part of my ideas worked properly.
But I did not give up. I generated a large number of hypotheses and then tested them.
Most ideas did not work, but those that did were like particles of gold
in a huge quantity of dross. My associative video memory method really works.
============================- Common information -==========================
The AVM algorithm uses a principle of multilevel decomposition of recognition matrices.
It is robust against camera noise, scales well, and is simple and quick
to train; it also shows acceptable performance at higher input video resolutions
(960x720 and above). The algorithm works with grayscale images.
Detailed information about the AVM algorithm can be found here:
Associative video memory
AVM SDK v0.5, with usage examples and tests comparing
the characteristics of the previous and new versions:
http://edv-detail.narod.ru/AVM_SDK_v0-5.zip
Demonstration video showing how to train AVM:
http://edv-detail.narod.ru/Face_training_demo.avi
AVM demo with a user interface (GUI), installer for Windows:
http://edv-detail.narod.ru/Recognition.zip
Connect a webcam and start the AVM demo after installing "Recognition.exe".
On startup the program will report that there is no previously stored
AVM training data, and it will then propose to set the key image size
for creating a new AVM instance. Then train AVM, using Face_training_demo.avi as an example.
========================- Robot's navigation -=========================
I also want to present my first experience with robot navigation powered by AVM.
Briefly, the navigation algorithm tries to align the positions of the turret
and the robot body with the center of the first recognized object in the
tracking list; if the object is far away the robot comes nearer, and if it is
too close the robot rolls back.
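Roughly, that behavior could be sketched like this. It is an illustration only, not the package source: the frame size, the thresholds, and the sendCommand() helper are assumptions of mine, while the command codes (cmLEFT, cmRIGHT, cmFORWARD, cmBACKWARDS) are the ones the Navigator package uses:
[code]
#include <cstdio>

// Command codes as used by the Navigator package (enum values here are placeholders).
enum Command { cmLEFT, cmRIGHT, cmFORWARD, cmBACKWARDS };

// Hypothetical transport; replace with your robot driver call.
static void sendCommand(Command c) { std::printf("command %d\n", c); }

// One control step: center on the tracked object, then manage distance.
static void navigateToObject(int objX, int objW, int objH, int frameW, int frameH)
{
    const int tol = frameW / 16;                   // dead zone around image center
    const int offset = (objX + objW / 2) - frameW / 2;

    if (offset < -tol)     sendCommand(cmLEFT);    // object left of center
    else if (offset > tol) sendCommand(cmRIGHT);   // object right of center

    // Apparent object height as a crude distance cue (an assumption of this sketch).
    if (objH < frameH / 4)      sendCommand(cmFORWARD);   // far away: come nearer
    else if (objH > frameH / 2) sendCommand(cmBACKWARDS); // too close: roll back
}

int main()
{
    navigateToObject(40, 80, 50, 320, 240);        // example object rect and frame
    return 0;
}
[/code]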
See video below:
YouTube - Robot's navigation by computer vision (AVM algorithm), experiment 1
YouTube - Robot's navigation by computer vision (AVM algorithm), experiment 2
I have made changes to the robot control algorithm,
and I have also used a low input image resolution of 320x240 pixels.
This gave a good result (see "Follow me"):
YouTube - Follow me (www.edv-detail.narod.ru)
Robot navigation by gates from point "A" to point "B"
See video below:
YouTube - Navigation by gates (www.edv-detail.narod.ru)
First, the user must set visual beacons (gates) that show the direction in which the robot has to go.
The robot then walks from gate to gate. If the robot recognizes the "target", it comes nearer and stops walking.
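In essence this is a small state machine; here is a minimal sketch (all names and conditions are illustrative assumptions of mine, not the package source):
[code]
#include <cstdio>

enum State { SEEK_GATE, APPROACH_TARGET, DONE };

// One decision step of gate-to-gate walking: steer toward the current gate,
// move on to the next one when it is reached, and once the "target" object
// is recognized, approach it and stop.
static void step(State &s, bool gateCentered, bool gateReached,
                 bool targetSeen, bool targetClose)
{
    if (s != DONE && targetSeen) s = APPROACH_TARGET;  // the target outranks gates

    switch (s) {
    case SEEK_GATE:
        if (gateReached)        std::puts("switch to next gate");
        else if (!gateCentered) std::puts("turn toward gate");
        else                    std::puts("drive forward");
        break;
    case APPROACH_TARGET:
        if (targetClose) { std::puts("stop"); s = DONE; }
        else             std::puts("drive toward target");
        break;
    case DONE:
        break;
    }
}

int main()
{
    State s = SEEK_GATE;
    step(s, true, false, false, false);  // centered on a gate: drive forward
    step(s, true, false, true,  false);  // target spotted: approach it
    step(s, true, false, true,  true);   // close enough: stop
    return 0;
}
[/code]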
Navigation application (installation for Windows):
http://edv-detail.narod.ru/Recognition.zip
Navigator package description
The package consists of three parts: the robot control driver, the pattern recognition application (GUI), and a dynamic link library "Navigator".
Compiling the pattern recognition application requires wxWidgets-2.8.x and OpenCV_1.0. If you have no desire to deal with the GUI, the project already contains a compiled recognizer (as an EXE), and it is enough to compile Navigator.dll, which contains the navigation algorithm. Compiling Navigator.dll requires only the OpenCV_1.0 library. You can build the project with Microsoft Visual C++ 6.0 (folder vc6.prj) or with Microsoft Visual Studio 2008 (folder vc9.prj).
After installing (and compiling) the wxWidgets-2.8.x and OpenCV_1.0 libraries, specify additional folders for the compiler:
Options / Directories / Include files:
<Install_Dir>\OPENCV\CV\INCLUDE
<Install_Dir>\OPENCV\CVAUX\INCLUDE
<Install_Dir>\OPENCV\CXCORE\INCLUDE
<Install_Dir>\OPENCV\OTHERLIBS\HIGHGUI
<Install_Dir>\OPENCV\OTHERLIBS\CVCAM\INCLUDE
<Install_Dir>\WXWIDGETS-2.8.10\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB\MSW
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB\MSWD
<Install_Dir>\WXWIDGETS-2.8.10\INCLUDE
<Install_Dir>\WXWIDGETS-2.8.10\INCLUDE\MSVC
Options / Directories / Library files:
<Install_Dir>\OPENCV\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB
The source code of the "Navigator" (for the English-speaking community) can be downloaded here:
edv-detail.narod.ru/Navigator_src_en.zip
To compile the source code you can use "MS Visual C++ 2008 Express Edition". It is official and free.
To connect a robot to the Navigator program, you have to adapt the control driver (.\src\RobotController) to your robot.
It's simple: the application Recognition.exe interacts with the robot driver through shared memory (gpKeyArray). All you need to do is use a timer (the CMainWnd::OnTimer method) to send commands from gpKeyArray to your robot.
A chain of start commands for "power on" (cmFIRE, cmPOWER) will be transmitted to the robot when you start navigation mode. Correspondingly, the "power off" command (cmPOWER) will be transmitted when navigation mode is disabled.
And most importantly: the cmLEFT and cmRIGHT commands should not activate motion by themselves, but only in combination with the "forward" and "back" commands (cmFORWARD, cmBACKWARDS), as in the sketch below.
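A minimal sketch of such a timer handler. Only the command codes come from the package; readNextCommand() is a hypothetical stand-in for however your build drains pending commands from the shared gpKeyArray buffer, and sendToRobot() for your serial or radio link:
[code]
#include <cstdio>

enum Command { cmNONE, cmFIRE, cmPOWER, cmLEFT, cmRIGHT, cmFORWARD, cmBACKWARDS };

// Stub: replace with reads from the shared gpKeyArray buffer.
static Command readNextCommand() { return cmNONE; }

// Hypothetical link to the robot hardware.
static void sendToRobot(const char *s) { std::printf("-> %s\n", s); }

// Periodic handler in the spirit of CMainWnd::OnTimer: forward pending commands.
static void OnTimer()
{
    static bool left = false, right = false;

    for (Command c; (c = readNextCommand()) != cmNONE; ) {
        switch (c) {
        case cmFIRE:  sendToRobot("fire");  break;
        case cmPOWER: sendToRobot("power"); break;         // power on/off
        case cmLEFT:  left = true;  right = false; break;  // remember steering,
        case cmRIGHT: right = true; left = false;  break;  // but do not move yet
        case cmFORWARD:                                    // motion applies steering
            sendToRobot(left ? "forward-left" : right ? "forward-right" : "forward");
            left = right = false;
            break;
        case cmBACKWARDS:
            sendToRobot(left ? "back-left" : right ? "back-right" : "back");
            left = right = false;
            break;
        default:
            break;
        }
    }
}

int main() { OnTimer(); return 0; }
[/code]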
Once you have adapted the control driver to your robot, you are ready to join the navigation experiments.
So, let's have fun together :)
Comments
As the robot I used a radio-controlled tank model, previously adapted for computer control (though the description is in Russian).
Video was captured by a USB TV tuner from a radio camera installed on the robot body.
The navigation program ran on a PC with an Intel Core 2 Duo E6600 processor.
Just print the file .\AVM_SDK_simple.net\bin\Go.png and show it to the camera for recognition in example 2.
Eric
The Navigator plugin ("third party") is my own development, and I made it specifically for use within the RoboRealm software.
Two new modes were added in the new version: "Marker mode" and "Navigate by map".
Marker mode
Marker mode builds a navigation map automatically by marking out the space. You just manually lead the robot along some path and repeat it several times for good map detail.
Navigation by map
In this mode you point at the target position on the navigation map; the robot then plans a path (maze solving) from its current location to the target position (big green circle) and begins automatically walking toward it.
For external control of "Navigate by map" mode, new module variables were added (see the sketch after this list):
NV_LOCATION_X - current location X coordinate;
NV_LOCATION_Y - current location Y coordinate;
NV_LOCATION_ANGLE - horizontal angle of robot in current location (in radians);
Target position on the navigation map:
NV_IN_TRG_POS_X - target position X coordinate;
NV_IN_TRG_POS_Y - target position Y coordinate;
NV_IN_SUBMIT_POS - submits the target position (set the value from 0 to 1 to trigger the action).
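As an illustration, a target could be submitted from outside RoboRealm through the RR_API C++ wrapper that ships with it. This is a sketch under assumptions: the connect/getVariable/setVariable/disconnect calls follow the documented RoboRealm API, but check the bundled RR_API.h for exact signatures; the coordinates are arbitrary examples.
[code]
#include <cstdio>
#include "RR_API.h"   // C++ wrapper shipped with RoboRealm

int main()
{
    RR_API rr;
    if (!rr.connect("localhost")) return 1;        // RoboRealm must be running

    char x[32], y[32];
    rr.getVariable("NV_LOCATION_X", x, sizeof(x)); // read the current location
    rr.getVariable("NV_LOCATION_Y", y, sizeof(y));
    std::printf("robot at (%s, %s)\n", x, y);

    rr.setVariable("NV_IN_TRG_POS_X", "120");      // choose a target on the map
    rr.setVariable("NV_IN_TRG_POS_Y", "80");
    rr.setVariable("NV_IN_SUBMIT_POS", "1");       // the 0 -> 1 change submits it
    rr.disconnect();
    return 0;
}
[/code]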
Examples
Quake 3 Odometry Test
Navigation by map
Visual Landmark Navigation
The Navigator package has been updated, and you can now download the next revision, AVM Navigator v0.7.3, via your account link.
Changes:
- The new "Back to checkpoint" algorithm was added in "Navigation by map" mode.
http://www.youtube.com/watch?v=wj-FKhdaU5A
- Also new "Watching mode" was developed.
And robot can move to direction where motion was noticed in this mode.
http://www.youtube.com/watch?v=c1aAcOS6cAg
Overall usability was also improved.
By the way, I received a new video from a user who succeeded with "Navigation by map":
http://www.youtube.com/watch?v=214MwcHMsTQ
Video and photos of his robot:
http://www.youtube.com/watch?v=S7QRDSfQRps
http://roboforum.ru/download/file.php?id=22484&mode=view
http://roboforum.ru/download/file.php?id=22280&mode=view
http://roboforum.ru/download/file.php?id=22281&mode=view
I believe that you will also have success with visual navigation using the AVM Navigator module ;-)
>>><<<
Yet another video from a user whose robot has an extremely high turn speed, yet the AVM Navigator module could control its navigation even under such bad conditions!
http://www.youtube.com/watch?v=G7SB_jKAcyE
His robot video:
http://www.youtube.com/watch?v=FJCrLz08DaQ
http://www.youtube.com/watch?v=F3u0rTNBCuA
BTW, your robot's voltage display reminded me to update an old thread I started: http://forums.parallax.com/showthread.php?132766-2.46-Battery-Voltage-Digital-Readout&p=1071744&viewfull=1#post1071744
I tested it on the prototype of Twinky rover and it works fine:
I have also added a "Learn from motion" option to "Object recognition" mode, so you should download the RoboRealm package
with the new AVM Navigator v0.7.4.
>> Hi, how do I update AVM Navigator? I purchased the last version.
You should have received a download link after registering on the RoboRealm site.
Just use this link to download the recent version of the RoboRealm package with AVM Navigator.
You can also start the RoboRealm application, click the "Options" button, and then click "Download Upgrade".
Then set the "Learn from motion" checkbox together with "Object recognition" mode in the AVM Navigator dialog window.
It is easy. Just use "Object recognition" mode in the AVM Navigator module.
First clear the AVM search tree by clicking the "Set key image size (New)" button and then pressing "Yes".
Now you can train AVM on some faces, as in the video below:
When training is done, you can use the variables described below in your VBScript program (a short sketch follows the list):
NV_OBJECTS_TOTAL - total number of recognized objects
NV_ARR_OBJ_RECT_X - left-top corner X coordinate of recognized object
NV_ARR_OBJ_RECT_Y - left-top corner Y coordinate of recognized object
NV_ARR_OBJ_RECT_W - width of recognized object
NV_ARR_OBJ_RECT_H - height of recognized object
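For instance, the same variables can also be read from outside the pipeline through the RR_API C++ wrapper bundled with RoboRealm (inside the pipeline, a VBScript program reads them directly). A sketch under assumptions: connect/getVariable follow the documented RoboRealm API, and the NV_ARR_* array variables are assumed to arrive as text whose first element atoi can take; check the bundled RR_API.h.
[code]
#include <cstdlib>
#include <cstdio>
#include "RR_API.h"   // C++ wrapper shipped with RoboRealm

int main()
{
    RR_API rr;
    if (!rr.connect("localhost")) return 1;

    char buf[64];
    rr.getVariable("NV_OBJECTS_TOTAL", buf, sizeof(buf));
    const int total = std::atoi(buf);
    std::printf("%d object(s) recognized\n", total);

    if (total > 0) {  // rectangle of the first recognized object
        rr.getVariable("NV_ARR_OBJ_RECT_X", buf, sizeof(buf)); int x = std::atoi(buf);
        rr.getVariable("NV_ARR_OBJ_RECT_Y", buf, sizeof(buf)); int y = std::atoi(buf);
        rr.getVariable("NV_ARR_OBJ_RECT_W", buf, sizeof(buf)); int w = std::atoi(buf);
        rr.getVariable("NV_ARR_OBJ_RECT_H", buf, sizeof(buf)); int h = std::atoi(buf);
        std::printf("first object at (%d, %d), size %dx%d\n", x, y, w, h);
    }
    rr.disconnect();
    return 0;
}
[/code]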
As examples, you can use the VBScript programs that were published in these topics:
http://www.roborealm.com/forum/index.php?thread_id=3881#
http://forums.trossenrobotics.com/showthread.php?4764-Using-of-AVM-plugin-in-RoboRealm&p=48865#post48865
I have some nice videos from a user who tested AVM Navigator on his robot, and I am sharing them with you:
Navigation by map:
Follow me in Navigate mode:
Watching mode:
Autonomous navigation view from outside:
You can find a short description of how it works on Wikipedia.
Changes:
- The indicator drawing was moved into the ::Annotate method.
- A 3D marker of the robot's target position was added to the camera view.
...
See here for all other changes.
This is exactly what you (the target) will look like to the Terminator right before he kills you on Judgement Day.
Fun with AVM Navigator
It's a little demo of object recognition and learning from motion with the help of AVM Navigator.
All object rectangle coordinates are available in the RoboRealm pipeline through external variables:
NV_ARR_OBJ_RECT_X - left-top corner X coordinate of recognized object
NV_ARR_OBJ_RECT_Y - left-top corner Y coordinate of recognized object
NV_ARR_OBJ_RECT_W - width of recognized object
NV_ARR_OBJ_RECT_H - height of recognized object
So you can use them in your VBScript program (see the sketch below).
See here for more details.
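For example, once the NV_ARR_OBJ_RECT_* values are parsed into arrays (inside the pipeline a VBScript program would typically fetch them with GetArrayVariable), picking the largest recognized object and its center takes only a few lines. Everything in this sketch is illustrative, and the sample numbers are made up:
[code]
#include <cstdio>

// Index of the largest object by rectangle area, or -1 if there are none.
static int largestObject(int total, const int w[], const int h[])
{
    int best = -1, bestArea = 0;
    for (int i = 0; i < total; ++i)
        if (w[i] * h[i] > bestArea) { bestArea = w[i] * h[i]; best = i; }
    return best;
}

int main()
{
    // Two recognized objects, as NV_ARR_OBJ_RECT_X/Y/W/H might describe them.
    int x[] = {10, 200}, y[] = {20, 40}, w[] = {50, 120}, h[] = {60, 90};
    const int i = largestObject(2, w, h);
    if (i >= 0)  // center = top-left corner plus half the size
        std::printf("largest object center: (%d, %d)\n",
                    x[i] + w[i] / 2, y[i] + h[i] / 2);
    return 0;
}
[/code]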
In fact, the AVM algorithm is not invariant to rotation, so during training you should show the object to the AVM search tree from different angles for correct recognition later.
See also an example of using the Canny module as a background for AVM Navigator:
I'm still working on AVM technology. I have now founded my own company, named Invarivision.com.
We are a small but passionate team of developers working on a system that can watch TV and recognize video that interests the user.
And we need your help!
It seems to us that the interface of our search system is good enough, because we tried to make it simple and user friendly, but from another point of view it could be a total disaster.
Could you please take a look at our system and tell us about its good and bad sides?
Constructive criticism is welcome.
With kind regards, EDV.
The user just uploads a video of interest to the search system, and the system then searches for it on all scanned channels.
The user can also add a TV channel of interest if the system does not have that channel yet.
So, in other words: the AVM Video Search system provides a service that allows customers to audit TV channels or to search a file for forbidden or copyrighted video, with the help of automatic recognition of the video content.
The main advantage of this system is direct recognition of individual frames during analysis of the video content, which makes it possible to find very short video fragments, about two seconds long.
I'm sorry for my English in this presentation; I tried my best.
It's hard to believe that all this has grown out of robot navigation (it is the same Associative Video Memory algorithm that is used for matching video content).