
Autonomous robot's navigation

ExDxV Posts: 29
edited 2014-01-07 03:57 in Robotics
Hello,

I am currently working on associative video memory. The method is still in development (now at version 0.5),
but it already gives good results.

I have been doing computer vision research in parallel with my main job
at "Impulse" for more than three years (it is my hobby).

About me

In the beginning my achievements were insignificant and only a small part of my ideas worked properly.
But I did not give up. I generated a large number of hypotheses and then tested them.

Most of the ideas did not work, but those that did were like particles of gold
in a huge quantity of dross. My associative video memory method really does work.

============================- Common information -==========================

The AVM algorithm uses a principle of multilevel decomposition of recognition matrices.
It is robust against camera noise, scales well, and is simple and quick to train;
it also shows acceptable performance at higher input video resolutions
(960x720 and above). The algorithm works with grayscale images.

Detailed information about the AVM algorithm can be found here:
Associative video memory

AVM SDK v0.5 with usage examples and tests comparing the characteristics
of the previous and new versions:
http://edv-detail.narod.ru/AVM_SDK_v0-5.zip

Demonstration video of how to train AVM:
http://edv-detail.narod.ru/Face_training_demo.avi

AVM demo with a user interface (GUI), installer for Windows:
http://edv-detail.narod.ru/Recognition.zip

Connect a web camera and start the AVM demo after installing "Recognition.exe".
On startup the program will report that there is no previously stored AVM training data
and will then offer to set the key image size for creating a new AVM instance.
After that, train AVM using Face_training_demo.avi as an example.

========================- Robot's navigation -=========================

I also want to introduce my first experience with robot navigation powered by AVM.

Briefly, the navigation algorithm tries to align the position of the turret
and the body of the robot with the center of the first recognized object in the
tracking list; if the object is far away the robot comes nearer, and if it is
too close the robot rolls back.
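
As a rough illustration of that logic, here is a minimal C++ sketch (not the actual Navigator code). It assumes the recognizer reports the first tracked object as a rectangle in the camera frame; the command names, thresholds and frame size are hypothetical:

#include <cstdio>

// Hypothetical command set, modeled on the driver commands described later in this post.
enum Command { cmNONE, cmLEFT, cmRIGHT, cmFORWARD, cmBACKWARDS };

struct ObjectRect { int x, y, w, h; };   // top-left corner plus size, in pixels

// Decide how to move so that the first recognized object ends up centered
// and at a comfortable apparent size (used as a proxy for distance).
Command chooseMove(const ObjectRect& obj, int frameWidth)
{
    const int frameCenterX  = frameWidth / 2;
    const int objectCenterX = obj.x + obj.w / 2;
    const int deadZone      = frameWidth / 20;   // ignore small offsets

    // First align on the object's center.
    if (objectCenterX < frameCenterX - deadZone) return cmLEFT;
    if (objectCenterX > frameCenterX + deadZone) return cmRIGHT;

    // Then regulate distance by apparent width: small -> come nearer, large -> roll back.
    const int tooFar   = frameWidth / 8;
    const int tooClose = frameWidth / 3;
    if (obj.w < tooFar)   return cmFORWARD;
    if (obj.w > tooClose) return cmBACKWARDS;
    return cmNONE;                               // centered and at a good distance
}

int main()
{
    ObjectRect obj = {40, 60, 40, 50};           // example detection in a 320x240 frame
    printf("command = %d\n", chooseMove(obj, 320));
    return 0;
}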

See video below:
YouTube - Robot's navigation by computer vision (AVM algorithm), experiment 1

YouTube - Robot's navigation by computer vision (AVM algorithm), experiment 2


I have made changes to the robot control algorithm, and I have also used
a lower input image resolution of 320x240 pixels.
This gave good results (see "Follow me"):
YouTube - Follow me (www.edv-detail.narod.ru)

Robot navigation by gates from point "A" to point "B"

See video below:
YouTube - Navigation by gates (www.edv-detail.narod.ru)

YouTube - Navigation by gates (www.edv-detail.narod.ru)


First, the user must set visual beacons (gates) that show the direction the robot has to go.
The robot will then walk from gate to gate. If the robot recognizes the "target", it comes nearer and stops walking.
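
A minimal sketch of that gate-to-gate logic, assuming the recognizer reports the name and apparent width of the object currently in view; the structures and thresholds here are my own illustration, not the Navigator source:

#include <cstdio>
#include <string>
#include <vector>

// Hypothetical recognition result: object name and its width in pixels.
struct Recognition { std::string name; int width; };

// Walk the gate list in order; stop when the "target" object is reached.
// Returns the index of the gate currently being approached and sets 'finished' at the target.
int updateRoute(const Recognition& seen, const std::vector<std::string>& gates,
                int currentGate, int frameWidth, bool& finished)
{
    const int closeEnough = frameWidth / 3;      // apparent width used as a distance proxy

    if (seen.name == "target" && seen.width >= closeEnough) {
        finished = true;                         // target reached: stop walking
        return currentGate;
    }
    if (currentGate < (int)gates.size() &&
        seen.name == gates[currentGate] && seen.width >= closeEnough) {
        return currentGate + 1;                  // this gate is passed, head for the next one
    }
    return currentGate;                          // keep approaching the current gate
}

int main()
{
    std::vector<std::string> gates = {"gate1", "gate2"};
    bool finished = false;
    int current = updateRoute({"gate1", 130}, gates, 0, 320, finished);
    printf("next gate index = %d, finished = %d\n", current, (int)finished);
    return 0;
}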

Navigation application (installation for Windows):
http://edv-detail.narod.ru/Recognition.zip

Navigator package description

The package consists of three parts: the robot control driver, the pattern recognition application (GUI), and a dynamic link library "Navigator".
Compiling the pattern recognition application requires wxWidgets-2.8.x and OpenCV_1.0. If you have no desire to deal with the GUI, the project already contains a compiled recognizer (as an EXE), and it is enough to compile Navigator.dll, which contains the navigation algorithm. Compiling Navigator.dll requires only the OpenCV_1.0 library. You can build the project with the Microsoft Visual C++ 6.0 compiler (folder vc6.prj) or with Microsoft Visual Studio 2008 (folder vc9.prj).

After installing (and compiling) the wxWidgets-2.8.x and OpenCV_1.0 libraries, you need to specify additional folders for the compiler:

Options / Directories / Include files:
<Install_Dir>\OPENCV\CV\INCLUDE
<Install_Dir>\OPENCV\CVAUX\INCLUDE
<Install_Dir>\OPENCV\CXCORE\INCLUDE
<Install_Dir>\OPENCV\OTHERLIBS\HIGHGUI
<Install_Dir>\OPENCV\OTHERLIBS\CVCAM\INCLUDE
<Install_Dir>\WXWIDGETS-2.8.10\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB\MSW
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB\MSWD
<Install_Dir>\WXWIDGETS-2.8.10\INCLUDE
<Install_Dir>\WXWIDGETS-2.8.10\INCLUDE\MSVC


Options / Directories / Library files:
<Install_Dir>\OPENCV\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB
<Install_Dir>\WXWIDGETS-2.8.10\LIB\VC_LIB


Source code of the "Navigator" (for the English-speaking community) can be downloaded here:
edv-detail.narod.ru/Navigator_src_en.zip

To compile the source code you can use "MS Visual C++ 2008 Express Edition". It is all official and free.


To connect a robot to the Navigator program, you have to adapt the control driver (.\src\RobotController) to your robot.

It's simple: the Recognition.exe application interacts with the robot driver through shared memory (gpKeyArray). All you need to do is, on a timer (method CMainWnd::OnTimer), send the commands from "gpKeyArray" to your robot.

A chain of start commands for "power on" (cmFIRE, cmPOWER) will be transmitted to the robot when you start navigation mode. Correspondingly, the "power off" command (cmPOWER) will be transmitted when navigation mode is disabled.

Most importantly: the cmLEFT and cmRIGHT commands should not activate motion by themselves, but only in combination with the "forward" and "back" commands (cmFORWARD, cmBACKWARDS).
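
For illustration, here is a minimal sketch of such a timer handler. The buffer layout, command codes and driveMotors() stub are placeholders standing in for whatever your robot's driver actually provides:

#include <cstdio>

// Hypothetical command codes matching the names used above.
enum Command { cmNONE, cmFIRE, cmPOWER, cmLEFT, cmRIGHT, cmFORWARD, cmBACKWARDS };

int gpKeyArray[8] = { cmFORWARD, cmRIGHT };   // shared buffer filled by Recognition.exe (assumed layout)

void driveMotors(int left, int right)         // stub for the low-level motor call
{
    printf("motors: left=%d right=%d\n", left, right);
}

// Called periodically (e.g. from CMainWnd::OnTimer) to forward pending commands to the robot.
void OnTimerTick()
{
    bool forward = false, backward = false, left = false, right = false;

    for (int i = 0; i < 8; ++i) {
        switch (gpKeyArray[i]) {
            case cmFORWARD:   forward  = true; break;
            case cmBACKWARDS: backward = true; break;
            case cmLEFT:      left     = true; break;
            case cmRIGHT:     right    = true; break;
            default: break;                   // cmFIRE/cmPOWER (power on/off) handled elsewhere
        }
    }

    // cmLEFT/cmRIGHT only steer while moving forward or backward, never on their own.
    int speed = forward ? 1 : (backward ? -1 : 0);
    int leftMotor = speed, rightMotor = speed;
    if (speed != 0 && left)  leftMotor  = 0;  // slow the inner track to turn a tank-style chassis
    if (speed != 0 && right) rightMotor = 0;

    driveMotors(leftMotor, rightMotor);
}

int main() { OnTimerTick(); return 0; }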

Once you have adapted the control driver to your robot, you are ready to join the navigation experiments.

So, let's have fun together :)

Comments

  • erco Posts: 20,257
    edited 2009-12-24 18:45
    Congratulations, very impressive videos! I'll have to take some time and watch all of your many YouTube videos. Your work is much more advanced than mine and most typical forum posts, and you will definitely cause some excitement here with your amazing achievements. What hardware are you using? Do svidaniya and S Rozhdestvom (goodbye and Merry Christmas)!
  • ExDxV Posts: 29
    edited 2009-12-25 06:38
    >> What hardware are you using?

    As the robot I used a radio-controlled model tank, previously adapted for computer control (though the description is in Russian).

    Video was captured by a USB TV tuner from a wireless camera installed on the robot body.
    The navigation program ran on a PC with an Intel Core 2 Duo E6600 processor.
  • ExDxV Posts: 29
    edited 2010-01-12 12:43
    For more information (news) see also: forums.trossenrobotics.com/showthread.php?t=3510&page=2
  • ExDxV Posts: 29
    edited 2010-08-05 08:24
    I have made a more convenient implementation of the AVM algorithm for C#, and I have also prepared detailed documentation, "Library AVM SDK simple.NET".

    Just print the file .\AVM_SDK_simple.net\bin\Go.png and show it to the camera for recognition in example 2.
  • ExDxV Posts: 29
    edited 2010-09-20 03:34
    The RoboRealm company has begun distributing the "Navigator" plugin with the RoboRealm software.
  • erco Posts: 20,257
    edited 2010-09-20 08:43
    Yeremeyev: Are you the "third party" who developed the AVM Navigator module? Congratulations! I must learn more about it.

    Eric
  • ExDxV Posts: 29
    edited 2010-09-20 22:24
    Thanks Eric!

    The Navigator plugin (the "third party" one) is my development, and I made it specifically for use within the RoboRealm software.
  • ExDxV Posts: 29
    edited 2011-05-05 04:41
    AVM Navigator v0.7 has now been released, and you can download it from the RoboRealm website.
    The new version adds two modes: "Marker mode" and "Navigate by map".

    Marker mode

    Marker mode provides automatic forming of a navigation map by marking out the space. You just have to lead the robot manually along some path and repeat it several times for good map detail.

    Navigation by map

    In this mode you point to the target position on the navigation map; the robot then plans a path (maze solving) from its current location to the target position (the big green circle) and begins walking to it automatically.
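
    As a rough illustration of what such "maze solving" on a map can look like, here is a minimal breadth-first-search sketch over a 2D occupancy grid. This is only a generic example of grid path planning, not the actual planner inside AVM Navigator:

    #include <cstdio>
    #include <queue>
    #include <vector>

    struct Cell { int x, y; };

    // Returns the number of steps from start to target on a grid where 0 = free and 1 = wall,
    // or -1 if the target cannot be reached.
    int planPath(const std::vector<std::vector<int>>& grid, Cell start, Cell target)
    {
        const int h = (int)grid.size(), w = (int)grid[0].size();
        std::vector<std::vector<int>> dist(h, std::vector<int>(w, -1));
        std::queue<Cell> q;
        dist[start.y][start.x] = 0;
        q.push(start);

        const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
        while (!q.empty()) {
            Cell c = q.front(); q.pop();
            if (c.x == target.x && c.y == target.y) return dist[c.y][c.x];
            for (int k = 0; k < 4; ++k) {
                int nx = c.x + dx[k], ny = c.y + dy[k];
                if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                    grid[ny][nx] == 0 && dist[ny][nx] == -1) {
                    dist[ny][nx] = dist[c.y][c.x] + 1;
                    q.push({nx, ny});
                }
            }
        }
        return -1;
    }

    int main()
    {
        std::vector<std::vector<int>> grid = {   // 0 = free cell, 1 = obstacle
            {0, 0, 0, 0},
            {1, 1, 0, 1},
            {0, 0, 0, 0},
        };
        printf("steps to target: %d\n", planPath(grid, {0, 0}, {0, 2}));
        return 0;
    }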


    For external control of the "Navigate by map" mode, the following new module variables have been added:

    NV_LOCATION_X - current location X coordinate;
    NV_LOCATION_Y - current location Y coordinate;
    NV_LOCATION_ANGLE - horizontal angle of the robot at the current location (in radians);


    Target position on the navigation map:
    NV_IN_TRG_POS_X - target position X coordinate;
    NV_IN_TRG_POS_Y - target position Y coordinate;

    NV_IN_SUBMIT_POS - submit the target position (the value should be toggled 0 -> 1 to trigger the action).
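
    For illustration, here is a minimal sketch of driving this mode from an external program. The setVar/getVar helpers below are placeholders for however your setup reads and writes RoboRealm pipeline variables; only the variable names and the 0 -> 1 submit convention come from the list above:

    #include <cstdio>
    #include <map>
    #include <string>

    // Stand-in for the pipeline variable store (placeholder, not the RoboRealm API).
    static std::map<std::string, double> vars;
    void   setVar(const std::string& name, double value) { vars[name] = value; }
    double getVar(const std::string& name)                { return vars.count(name) ? vars[name] : 0.0; }

    // Submit a new target position on the navigation map.
    void sendRobotTo(int mapX, int mapY)
    {
        setVar("NV_IN_TRG_POS_X", mapX);     // target position X coordinate
        setVar("NV_IN_TRG_POS_Y", mapY);     // target position Y coordinate
        setVar("NV_IN_SUBMIT_POS", 0);       // toggle 0 -> 1 to submit the target
        setVar("NV_IN_SUBMIT_POS", 1);
    }

    int main()
    {
        sendRobotTo(120, 80);
        printf("robot at (%.0f, %.0f), heading %.2f rad\n",
               getVar("NV_LOCATION_X"), getVar("NV_LOCATION_Y"), getVar("NV_LOCATION_ANGLE"));
        return 0;
    }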

    Examples

    Quake 3 Odometry Test
    Navigation by map
    Visual Landmark Navigation
  • ExDxV Posts: 29
    edited 2011-12-02 11:40
    AVM Navigator v0.7.3 is released!

    The Navigator package has been updated, and you can now download the next revision, AVM Navigator v0.7.3, from your account link.

    Changes:
    - A new "Back to checkpoint" algorithm was added to the "Navigation by map" mode.
    http://www.youtube.com/watch?v=wj-FKhdaU5A

    - A new "Watching mode" was also developed.
    In this mode the robot can move toward the direction where motion was noticed.
    http://www.youtube.com/watch?v=c1aAcOS6cAg

    Overall usability was also improved.

    By the way, I received a new video from a user who succeeded with "Navigation by map":
    http://www.youtube.com/watch?v=214MwcHMsTQ

    His robot video and photo:
    http://www.youtube.com/watch?v=S7QRDSfQRps
    http://roboforum.ru/download/file.php?id=22484&mode=view
    http://roboforum.ru/download/file.php?id=22280&mode=view
    http://roboforum.ru/download/file.php?id=22281&mode=view

    I believe that you too will have success with visual navigation using the AVM Navigator module ;-)

    >>><<<

    Yet another video from a user whose robot has an extremely high turn speed, yet the AVM Navigator module could control the robot's navigation even under these difficult conditions!
    http://www.youtube.com/watch?v=G7SB_jKAcyE

    His robot video:
    http://www.youtube.com/watch?v=FJCrLz08DaQ
  • ExDxV Posts: 29
    edited 2012-02-04 08:30
    Here is a test of a new robot for the AVM Navigator project:
    http://www.youtube.com/watch?v=F3u0rTNBCuA
  • erco Posts: 20,257
    edited 2012-02-04 10:01
    Beautiful job! Amazing robot navigation.

    BTW, your robot's voltage display reminded me to update an old thread I started: http://forums.parallax.com/showthread.php?132766-2.46-Battery-Voltage-Digital-Readout&p=1071744&viewfull=1#post1071744 :)
  • Martin_H Posts: 4,051
    edited 2012-02-04 17:40
    Neat project. I have a fascination with machine vision systems. I've only done simple stuff though, nothing this fancy.
  • ExDxV Posts: 29
    edited 2012-04-07 07:56
    Here is a complete solution for object tracking.

    I tested it on the prototype of “Twinky rover” and it works fine:

    I have also added a “Learn from motion” option to “Object recognition” mode, so you should download the RoboRealm package
    with the new AVM Navigator v0.7.4.

    >> Hi, how do I update AVM Navigator? I purchased it last version.

    You should have received a download link after registering on the RoboRealm site.
    Just use this link to download the most recent version of the RoboRealm package with AVM Navigator.

    You can also start the RoboRealm application, click the "Option" button, and then click "Download Upgrade".

    Then you should set the "Learn from motion" checkbox together with "Object recognition" mode in the AVM Navigator dialog window.
    plingboot wrote:
    I've had a 'fiddle' with AVM navigator and managed to teach it a few objects/faces, but not the first idea how to turn that into tracking commands…
    It is easy. You should just use "Object recognition" mode in the AVM Navigator module.
    First clear the AVM search tree by clicking the "Set key image size (New)" button and then pressing "Yes".

    Now you can train AVM on some faces, as in the video below:

    When training is done, you can use the variables described below in your VBScript program:

    NV_OBJECTS_TOTAL - total number of recognized objects
    NV_ARR_OBJ_RECT_X - X coordinate of the top-left corner of the recognized object
    NV_ARR_OBJ_RECT_Y - Y coordinate of the top-left corner of the recognized object
    NV_ARR_OBJ_RECT_W - width of the recognized object
    NV_ARR_OBJ_RECT_H - height of the recognized object

    As an example, you can use the VBScript programs that were published in these topics:
    http://www.roborealm.com/forum/index.php?thread_id=3881#
    http://forums.trossenrobotics.com/showthread.php?4764-Using-of-AVM-plugin-in-RoboRealm&p=48865#post48865
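
    As a rough sketch of how these variables can be turned into a tracking command (my own illustration in C++, not code from the linked examples):

    #include <cstdio>

    int main()
    {
        // Stand-ins for values read from the pipeline: NV_OBJECTS_TOTAL and the
        // NV_ARR_OBJ_RECT_X / NV_ARR_OBJ_RECT_W arrays for two recognized objects.
        const int frameWidth = 320;
        const int total      = 2;
        int rectX[] = {30, 200};
        int rectW[] = {60, 25};

        // Track the largest recognized object (closest, judging by apparent size).
        int best = 0;
        for (int i = 1; i < total; ++i)
            if (rectW[i] > rectW[best]) best = i;

        // Normalized horizontal error in [-1, 1]: negative means the object is left of center.
        double error = (rectX[best] + rectW[best] / 2.0 - frameWidth / 2.0) / (frameWidth / 2.0);
        printf("steer %s by %.2f\n", error < 0 ? "left" : "right", error < 0 ? -error : error);
        return 0;
    }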
  • ExDxV Posts: 29
    edited 2012-04-07 08:34
    Okay guys :)
    I have some nice videos from a user who tested AVM Navigator on his robot, and I will share them with you:

    Navigation by map:

    Follow me in “Navigate mode”:

    Watching mode:
  • ExDxV Posts: 29
    edited 2012-07-28 16:33
    Here is a rather difficult route that a robot passed with the help of AVM Navigator (route training and passing):


    Autonomous navigation, viewed from outside:
  • ExDxV Posts: 29
    edited 2012-07-31 00:39
    Twinky rover and fruit (color tracking with RoboRealm)
  • erco Posts: 20,257
    edited 2012-07-31 08:50
    You've been busy, ExDxV! Great job! So your navigation tests all use just a camera and RoboRealm, no sonar, is that right?
  • ExDxV Posts: 29
    edited 2012-08-12 10:32
    Yes, you are right :)
    You can find a short description of how it works on Wikipedia.
  • ExDxV Posts: 29
    edited 2012-10-08 03:44
  • erco Posts: 20,257
    edited 2012-10-08 04:11
    Impressive and a bit scary!

    This is exactly what you (the target) will look like to the Terminator right before he kills you on Judgement Day. :)
  • ExDxV Posts: 29
    edited 2012-10-09 11:24
    That was a good joke ;-)

    Fun with AVM Navigator

    It's a little demo of object recognition and learning from motion with the help of AVM Navigator.

    All object rectangle coordinates are available in the RoboRealm pipeline via external variables:
    NV_ARR_OBJ_RECT_X - X coordinate of the top-left corner of the recognized object
    NV_ARR_OBJ_RECT_Y - Y coordinate of the top-left corner of the recognized object
    NV_ARR_OBJ_RECT_W - width of the recognized object
    NV_ARR_OBJ_RECT_H - height of the recognized object

    So you can use them in your VBScript program.

    See here for more details.
  • ExDxV Posts: 29
    edited 2012-10-11 09:34
    alan wrote:
    Hi,

    I noticed in the YouTube video that you jiggle the object (your face or the book) slightly during the learning phase. Is this an advantage?

    Cheers,

    Alan

    In fact the AVM algorithm is not invariant to rotation, so during training you should show the object to the AVM search tree at different angles so that it is memorized well enough for correct recognition later.

    See also an example of using the Canny module as a background for AVM Navigator:

  • ExDxV Posts: 29
    edited 2012-10-16 09:32
  • ExDxV Posts: 29
    edited 2014-01-06 06:23
    Hi guys,

    I'm still working on AVM technology. I have now founded my own company, named Invarivision.com.
    We are a small but passionate team of developers working on a system that can watch TV and recognize video that interests the user.

    And we need your help!

    It seems to us that the interface of our search system is good enough, because we have tried to make it simple and user friendly, but from another point of view it could be a total disaster.

    Could you please take a look at our system and tell us about its good and bad sides?

    Constructive criticism is welcome.

    With kind regards, EDV.
  • ExDxV Posts: 29
    edited 2014-01-07 03:57
    Cbenson wrote:
    This is quite different from autonomous robot navigation.
    Our search system is also a robot: one that watches TV like a human, but does it on several TV channels simultaneously, nonstop, and without getting tired :)
    Cbenson wrote:
    Can you explain more about how the system would determine what video interests the user?
    The user just uploads a video that interests him to the search system, and the system then searches for it on all the channels that are being scanned.

    The user can also add a TV channel of his own interest if the system does not have that channel yet.

    So, in other words: the AVM Video Search system provides a service that allows customers to audit TV channels or to search files for forbidden or copyrighted video with the help of automatic recognition of the video content.

    The main advantage of this system is direct recognition of individual frames when analyzing video content, which makes it possible to find very small video fragments, about two seconds long.
  • I've prepared an InvariMatch presentation:

    I'm sorry for my English in this presentation; I tried my best :)

    It's hard to believe that all this has grown out of robot navigation (it is the same Associative Video Memory algorithm, used here for matching video content).

  • erco Posts: 20,257
    Fantastic! Your English is perfect compared to my Russky, tovarisch!